WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
Re: [Xen-devel] Re: poor domU VBD performance.

To: Andrew Theurer <habanero@xxxxxxxxxx>
Subject: Re: [Xen-devel] Re: poor domU VBD performance.
From: Steven Hand <Steven.Hand@xxxxxxxxxxxx>
Date: Tue, 29 Mar 2005 20:13:36 +0100
Cc: Ian Pratt <m+Ian.Pratt@xxxxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Peter Bier <peter_bier@xxxxxx>
Delivery-date: Tue, 29 Mar 2005 19:13:43 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: Your message of "Tue, 29 Mar 2005 12:39:42 MDT." <200503291239.42677.habanero@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> On Tuesday 29 March 2005 02:13, Ian Pratt wrote:
> > > It looks like there might be a problem where we are not getting a
> > > timely response back from the dom0 VBD driver that the io request is
> > > complete, which limits the number of outstanding requests to a level
> > > which cannot keep the disk utilized well.  If you drive enough
> > > outstanding IO requests (which can be done with either o-direct with
> > > large requests or a much larger readahead setting with buffered IO),
> > > it's not an issue.
> >
> > Andrew, please could you try this with a 2.4 dom0, 2.6 domU.
> 
> 2.4 might be a little while for me, as I am running Fedora Core 3 with udev.  
> If anyone has an easy way to get around the hotplug/udev stuff, then I can 
> do this.

You can run a populated /dev "underneath" the udev stuff quite happily; 
e.g. if you boot into FC3 w/ udev do: 

  cd /dev/ 
  tar zcpf /root/foo.tgz . 

If you can boot from a rescue CD or similar, just mount your FC3 
partition and untar the device nodes.

Works just fine. 


> I did run a sequential read on a single disk again (using noop IO schedulers 
> in both domains) with various request sizes with o_direct while capturing 
> iostat output.  The results are interesting.  I have included the data in a 
> file because it would just line wrap and be unreadable in this email text.  
> Notice the service commit times for domU tests.  It's like the IO request 
> queue is being plugged for a minimum of 10ms in dom0.  Merges happening for 
> >4K requests in dom0 (while hosting domU's IO) seem to support this.

[snip]

Ah - thanks for this -- will take a detailed look shortly. 

cheers,

S.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel