Re: [Xen-devel] Re: poor domU VBD performance.
> On Tuesday 29 March 2005 02:13, Ian Pratt wrote:
> > > It looks like there might be a problem where we are not
> > > getting a timely response back from the dom0 VBD driver that
> > > the IO request is complete, which limits the number of
> > > outstanding requests to a level that cannot keep the disk
> > > well utilized. If you drive enough outstanding IO requests
> > > (which can be done either with O_DIRECT and large requests, or
> > > with a much larger readahead setting and buffered IO), it's
> > > not an issue.
> >
> > Andrew, please could you try this with a 2.4 dom0, 2.6 domU.
>
> 2.4 might be a little while for me, as I am running Fedora Core 3 with
> udev. If anyone has an easy way to get around the hotplug/udev stuff,
> then I can do this.
You can run a populated /dev "underneath" the udev stuff quite happily;
e.g. if you boot into FC3 with udev, do:

    cd /dev/
    tar zcpf /root/foo.tgz .

If you can boot from a rescue CD or similar, just mount your FC3
partition and untar the device nodes. Works just fine.
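End to end, the save/restore dance above might look like this (the archive
path is the one from the mail; the rescue-CD mount point and partition are
assumptions, adjust for your layout):

```shell
# While running FC3 with udev: snapshot the populated /dev so a kernel
# without udev/hotplug (e.g. a 2.4 dom0) still finds its device nodes.
# -p preserves permissions, ownership and device major/minor numbers.
cd /dev
tar zcpf /root/foo.tgz .

# Later, from a rescue CD with the FC3 root mounted (mount point and
# partition below are hypothetical), unpack the nodes back into /dev:
#   mount /dev/sda2 /mnt/sysimage     # hypothetical FC3 root partition
#   cd /mnt/sysimage/dev
#   tar zxpf /mnt/sysimage/root/foo.tgz
```

The key point is `-p` plus running the extract as root, so the device
nodes come back with the same numbers and permissions udev would have
given them.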
> I did run a sequential read on a single disk again (using the noop IO
> scheduler in both domains) with various request sizes with O_DIRECT
> while capturing iostat output. The results are interesting. I have
> included the data in a file because it would just line wrap and be
> unreadable in this email text. Notice the service commit times for the
> domU tests. It's like the IO request queue is being plugged for a
> minimum of 10ms in dom0. Merges happening for >4K requests in dom0
> (while hosting domU's IO) seem to support this.
[snip]
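For anyone wanting to reproduce the measurement described above, it could be
scripted roughly as follows. The device path and request sizes are
placeholders (the sketch defaults to /dev/zero only so the loop runs
anywhere; point DISK at the real VBD, e.g. the exported device in the domU),
and `iostat -x` assumes the sysstat package is installed:

```shell
# Sequential O_DIRECT reads at increasing request sizes, with iostat
# sampling extended device statistics once a second in the background.
DISK="${DISK:-/dev/zero}"          # placeholder -- set to the VBD under test
LOG="${LOG:-/tmp/iostat.log}"

iostat -x 1 > "$LOG" 2>/dev/null &
IOSTAT_PID=$!

for bs in 4k 64k 256k 1M; do
    echo "--- bs=$bs ---"
    # iflag=direct opens the input with O_DIRECT (GNU dd); drop the flag
    # for the buffered-IO comparison runs.
    dd if="$DISK" of=/dev/null bs="$bs" count=256 iflag=direct 2>&1 | tail -n1
done

kill "$IOSTAT_PID" 2>/dev/null || true
```

Comparing the svctm/await columns in the log between bare-dom0 and domU
runs is what shows the queue-plugging effect discussed in the thread.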
Ah - thanks for this -- will take a detailed look shortly.
cheers,
S.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel