xen-devel

[Xen-devel] Re: poor domU VBD performance.

Andrew Theurer <habanero <at> us.ibm.com> writes:

> 
> > My dd command was always the same: "dd if=/dev/hdb6 bs=64k count=1000" and
> > it took 1.6 seconds on hdb6 and 2.2 seconds on hda1 when running in Dom0
> > and it took 4.6 seconds on hdb6 and 5.8 seconds on hda1 when running on
> > DomU. I did one experiment with count=10000 and it took ten times as long
> > in each of the four cases.
> >
> > I have done the following tests:
> > DomU : dd if=/dev/hdb6 of=/dev/null bs=1024k count=4000 ; duration 301 sec
> > DomU : dd if=/dev/hdb6 of=/dev/null bs=1024k count=4000 ; duration 370 sec
> >
> > Dom0 : dd if=/dev/hdb6 of=/dev/null bs=1024k count=4000 ; duration 115 sec
> > Dom0 : dd if=/dev/hda1 of=/dev/null bs=1024k count=4000 ; duration 140 sec
> 
> OK, I have reproduced this with both dd and O_DIRECT now.  With O_DIRECT, I
> needed to use what was the effective dd block request size (128k), and I got
> similar results.  My results are much worse because I am driving 14 disks:
> 
> dom0: 153.5 MB/sec
> domU:  12.7 MB/sec
> 
> It looks like there might be a problem where we are not getting a timely 
> response back from the dom0 VBD driver that the I/O request is complete, which 
> limits the number of outstanding requests to a level that cannot keep the 
> disk well utilized.  If you drive enough outstanding I/O requests (which can 
> be done either with O_DIRECT and large requests or with a much larger readahead 
> setting and buffered I/O), it's not an issue. 
> 
> In the domU, can you try setting the readahead size to a much larger value 
> using hdparm? Something like hdparm -a 2028, then run dd?
> 
> -Andrew
> 
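
As a point of reference, the readahead tuning suggested above can be checked 
and applied from inside the domU roughly like this (the device name is only an 
example, and the value is in 512-byte sectors):

  hdparm -a /dev/hde          # show the current readahead setting
  hdparm -a 2048 /dev/hde     # set readahead to 2048 sectors (1 MB)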

It's Tuesday now, and I am working in the office using my two machines with 
the Promise controller. The two differ in that one uses IDE disks, while the 
other, newer one has SATA disks. I have restricted myself to the older 
computer. 

It has one disk, a Maxtor 6Y120L0, 120 GB with a 2048 KB cache. On that machine 
the disk is hde and the exported slice is hde1. The slice is not in use, and I 
am running the OS from a loop-backed file as rootfs. I have done a 

"dd if=/dev/hde1 of=/dev/null bs=1024k count=1024"

in domU. 
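
(As an aside: if the dd in the guest is a GNU coreutils build that supports 
direct I/O, the same amount of data can be read while bypassing the page cache, 
and with it the readahead setting, roughly like this:

  dd if=/dev/hde1 of=/dev/null bs=128k count=8192 iflag=direct

This is only a sketch of the O_DIRECT-style test mentioned above, not what I 
actually ran.)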

hdparm reported that the default readahead setting was 256 sectors.

I have tested the performance with the following readahead settings:

readahead    |     duration 
128 sectors  |     160 sec
256 sectors  |      76 sec
512 sectors  |      18.5 sec
1024 sectors |      19.5 sec
2048 sectors |     786 sec
1536 sectors |     775 sec
1200 sectors |     457 sec
1000 sectors |     20 sec
800 sectors  |     18.5 sec
600 sectors  |     18.5 sec  

Dom0 takes 18.0 seconds no matter what the readahead setting in Dom0 is.
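
For completeness, a sweep like the one above can be scripted along these lines 
(just a sketch; it assumes a quiet machine and does not attempt to flush the 
page cache between runs, so repeated reads of the same data may be partly 
cached):

  for ra in 128 256 512 1024 2048; do
      hdparm -a $ra /dev/hde
      time dd if=/dev/hde1 of=/dev/null bs=1024k count=1024
  done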

Peter 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel