Re: [Xen-users] Xen4.0.1 : slow Disk IO on DomU

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Xen4.0.1 : slow Disk IO on DomU
From: Joost Roeleveld <joost@xxxxxxxxxxxx>
Date: Fri, 18 Mar 2011 09:00:43 +0100
Delivery-date: Fri, 18 Mar 2011 01:02:10 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4D82455E.6080808@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTi=mJHRZ_iiLLU6aTX6tHQXZT27+BZ20irSzPrDB@xxxxxxxxxxxxxx> <20110317083208.AE3901131@xxxxxxxxxxxxxxxxx> <4D82455E.6080808@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/4.6 beta4 (Linux/2.6.36-gentoo-r5; KDE/4.6.0; x86_64; ; )
On Thursday 17 March 2011 18:31:10 Erwan RENIER wrote:
> > On 17/03/2011 09:31, Joost Roeleveld wrote:
> > On Wednesday 16 March 2011 23:31:31 Erwan RENIER wrote:
> >> Hi,
> >> When I test the I/O bandwidth, it is much slower on DomU:
> >> 
> >> Dom0: read 180 MB/s, write 60 MB/s
> >> DomU: read 40 MB/s, write 6 MB/s
> > 
> > Just did the same tests on my installation (not yet on Xen4):
> > Dom0:
> > # hdparm -Tt /dev/md5
> > 
> > /dev/md5:
> >   Timing cached reads:   6790 MB in  1.99 seconds = 3403.52 MB/sec
> >   Timing buffered disk reads:  1294 MB in  3.00 seconds = 430.94 MB/sec
> > 
> > (md5 = 6-disk RAID-5 software raid)
> > 
> > # hdparm -Tt /dev/vg/domU_sdb1
> > 
> > /dev/vg/domU_sdb1:
> >   Timing cached reads:   6170 MB in  2.00 seconds = 3091.21 MB/sec
> >   Timing buffered disk reads:  1222 MB in  3.00 seconds = 407.24 MB/sec
> > 
> > DomU:
> > # hdparm -Tt /dev/sdb1
> > 
> > /dev/sdb1:
> >   Timing cached reads:   7504 MB in  1.99 seconds = 3761.93 MB/sec
> >   Timing buffered disk reads:  792 MB in  3.00 seconds = 263.98 MB/sec
> > 
> > Like you, I do see some drop in performance, but not as severe as you are
> > experiencing.
> > 
> >> The DomU disks are Dom0 logical volumes, I use paravirtualized guests,
> >> and the fs type is ext4.
> > 
> > How do you pass the disks to the domU?
> > I pass them as such:
> > disk = ['phy:vg/domU_sda1,sda1,w',
> > (rest of the partitions removed for clarity)
> 
> My DomU conf is like this:
> kernel = "vmlinuz-2.6.32-5-xen-amd64"
> ramdisk = "initrd.img-2.6.32-5-xen-amd64"
> root = "/dev/mapper/pvops-root"
> memory = "512"
> disk = [ 'phy:vg0/p2p,xvda,w' , 'phy:vg0/mmd,xvdb1,w', 'phy:sde3,xvdb2,w' ]
> vif = [ 'bridge=eth0' ]
> vfb = [ 'type=vnc,vnclisten=0.0.0.0' ]
> keymap = 'fr'
> serial = 'pty'
> vcpus = 2
> on_reboot = 'restart'
> on_crash = 'restart'

Seems OK to me.
Did you pin dom0 to a dedicated CPU core?
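
In case it is useful, this is roughly what I mean (just a sketch; the GRUB 
entry and the core numbers below are examples, adjust them to your hardware):

In the Xen line in GRUB, give dom0 one vCPU and pin it:
  kernel /boot/xen.gz dom0_max_vcpus=1 dom0_vcpus_pin

Or at runtime with xm (pin dom0's vcpu 0 to physical cpu 0):
  # xm vcpu-pin Domain-0 0 0

And in the domU config, keep the guests off that core, e.g.:
  cpus = "1-3"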

> > Either you are hitting a bug or it's a configuration issue.
> > What is the configuration for your domU? And specifically the way you
> > pass the LVs to the domU.
> 
> As you can see:
> xvda is an lv exported as a whole disk with lvm on it, so xvda2 is an lv
> from a vg in an lv (ext4 => lv => vg => pv => virtual disk => lv => vg
> => pv => raid5 => disk)
> xvdb1 is an lv exported as a partition (ext4 => virtual part => lv => vg
> => pv => raid5 => disk)
> xvdb2 is a physical partition exported as a partition (ext3 => virtual
> part => disk)
> 
> Curiously, it seems the more complicated the stack, the better it performs :/

Yes, it does seem that way. I am wondering if adding more layers increases the 
amount of in-memory caching, which then leads to higher "perceived" 
performance.
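
One way to take the page cache out of the equation might be a quick dd test 
with direct I/O inside the domU (just a sketch; the path below is only an 
example, point it at a filesystem on the volume you want to test):

  # dd if=/dev/zero of=/mnt/test/ddtest.img bs=1M count=1024 oflag=direct
  # dd if=/mnt/test/ddtest.img of=/dev/null bs=1M iflag=direct
  # rm /mnt/test/ddtest.img

That should give write and read numbers that depend far less on how much 
caching happens in the different layers.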

One other thing: I don't use "xvd*" for the device names, but am still using 
"sd*". I wonder if that changes the way things behave internally?

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users