Re: [Xen-users] Xen4.0.1 : slow Disk IO on DomU

To: Joost Roeleveld <joost@xxxxxxxxxxxx>
Subject: Re: [Xen-users] Xen4.0.1 : slow Disk IO on DomU
From: Erwan RENIER <erwan.renier@xxxxxxxxxxx>
Date: Fri, 18 Mar 2011 19:14:01 +0100
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 18 Mar 2011 11:15:23 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20110318080110.CC2832418@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTi=mJHRZ_iiLLU6aTX6tHQXZT27+BZ20irSzPrDB@xxxxxxxxxxxxxx> <20110317083208.AE3901131@xxxxxxxxxxxxxxxxx> <4D82455E.6080808@xxxxxxxxxxx> <20110318080110.CC2832418@xxxxxxxxxxxxxxxxx>
Reply-to: erwan.renier@xxxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; fr; rv:1.9.2.15) Gecko/20110303 Thunderbird/3.1.9
On 18/03/2011 09:00, Joost Roeleveld wrote:
On Thursday 17 March 2011 18:31:10 Erwan RENIER wrote:
On 17/03/2011 09:31, Joost Roeleveld wrote:
On Wednesday 16 March 2011 23:31:31 Erwan RENIER wrote:
Hi,
When I test the I/O bandwidth, it is much slower on the DomU:

Dom0: read 180 MB/s, write 60 MB/s
DomU: read  40 MB/s, write  6 MB/s
Just did the same tests on my installation (not yet on Xen4):
Dom0:
# hdparm -Tt /dev/md5

/dev/md5:
   Timing cached reads:   6790 MB in  1.99 seconds = 3403.52 MB/sec
   Timing buffered disk reads:  1294 MB in  3.00 seconds = 430.94 MB/sec

(md5 = 6-disk software RAID-5)

# hdparm -Tt /dev/vg/domU_sdb1

/dev/vg/domU_sdb1:
   Timing cached reads:   6170 MB in  2.00 seconds = 3091.21 MB/sec
   Timing buffered disk reads:  1222 MB in  3.00 seconds = 407.24 MB/sec

DomU:
# hdparm -Tt /dev/sdb1

/dev/sdb1:
   Timing cached reads:   7504 MB in  1.99 seconds = 3761.93 MB/sec
   Timing buffered disk reads:  792 MB in  3.00 seconds = 263.98 MB/sec

Like you, I do see some drop in performance, but not as severe as you are experiencing.
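
(The thread does not show which commands produced the figures above; hdparm only measures reads, so the write numbers presumably came from another tool. A minimal sketch of one common way to get comparable numbers, assuming hdparm for sequential reads and GNU dd with O_DIRECT for sequential writes, run once in Dom0 and once in the DomU against the relevant device; the test file path is arbitrary:)

# Sequential read throughput (cached and buffered timings):
hdparm -Tt /dev/xvda

# Sequential write throughput with O_DIRECT, so the page cache does not inflate the number:
dd if=/dev/zero of=/root/ddtest bs=1M count=512 oflag=direct conv=fsync
rm -f /root/ddtest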

The DomU disks are Dom0 logical volumes, I use paravirtualized guests, and the fs type is ext4.
How do you pass the disks to the domU?
I pass them as such:
disk = ['phy:vg/domU_sda1,sda1,w',
(rest of the partitions removed for clarity)
My DomU conf is like this:
kernel =  "vmlinuz-2.6.32-5-xen-amd64"
ramdisk = "initrd.img-2.6.32-5-xen-amd64"
root = "/dev/mapper/pvops-root"
memory = "512"
disk = [ 'phy:vg0/p2p,xvda,w', 'phy:vg0/mmd,xvdb1,w', 'phy:sde3,xvdb2,w' ]
vif = [ 'bridge=eth0' ]
vfb = [ 'type=vnc,vnclisten=0.0.0.0' ]
keymap = 'fr'
serial = 'pty'
vcpus = 2
on_reboot = 'restart'
on_crash = 'restart'
Seems ok to me.
Did you pin the dom0 to a dedicated cpu-core?
Nope.
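
(Not from the thread: a minimal sketch of what pinning dom0 to a dedicated core usually looks like, using the standard Xen boot parameters and xm commands; the <domU-name> placeholder stands for the guest's actual name.)

# On the Xen line of the bootloader config, give dom0 one dedicated, pinned vCPU:
#   dom0_max_vcpus=1 dom0_vcpus_pin

# Or pin at runtime from dom0:
xm vcpu-pin Domain-0 0 0       # dom0 vCPU 0 -> physical CPU 0
xm vcpu-pin <domU-name> 0 1    # guest vCPU 0 -> physical CPU 1
xm vcpu-list                   # verify the pinning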
Either you are hitting a bug or it's a configuration issue.
What is the configuration for your domU? And specifically the way you
pass the LVs to the domU.
As you can see:
xvda is an LV exported as a whole disk with LVM on it, so xvda2 is an LV from a VG in an LV (ext4 => LV => VG => PV => virtual disk => LV => VG => PV => RAID5 => disk).
xvdb1 is an LV exported as a partition (ext4 => virtual partition => LV => VG => PV => RAID5 => disk).
xvdb2 is a physical partition exported as a partition (ext3 => virtual partition => disk).
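
(A hedged sketch, not from the thread, of how each of those stacks can be listed with the standard LVM/md tools; the names are taken from the config above.)

# In dom0: the exported LVs and the RAID5 array beneath them
lvs vg0                  # the p2p and mmd logical volumes
cat /proc/mdstat         # the software RAID5 backing the PV
dmsetup table vg0-p2p    # device-mapper target behind the LV handed to the guest

# In the domU: the LVM stack nested inside xvda
pvs && vgs && lvs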

Curiously, it seems the more complicated the stack, the better it performs :/
Yes, it does seem that way. I am wondering if adding more layers increases the amount of in-memory caching, which then leads to a higher "perceived" performance.

One other thing: I don't use "xvd*" for the device names, but am still using "sd*". I wonder if that changes the way things behave internally?
It doesn't change with sd*.
I noticed that the CPU I/O wait occurs in the domU; nothing happens in dom0.

Does anyone know a way to debug this, at the kernel level or in the hypervisor? By the way, how can I see the hypervisor's activity? I don't think it appears in dom0.
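
(Not from the thread, but the usual observation points for this kind of problem, assuming the standard Xen and sysstat tools are installed:)

# From dom0: per-domain CPU and block-I/O counters as seen by the hypervisor/backends
xentop                   # the VBD_RD / VBD_WR columns show the guest's block requests

# Hypervisor-level tracing (the trace is captured by Xen itself, not by dom0's kernel):
xentrace -D -e all /tmp/xen-trace.raw                   # stop with Ctrl-C
xentrace_format formats < /tmp/xen-trace.raw | less     # "formats" ships with the Xen tools

# From inside the domU: confirm where the iowait is being spent
iostat -x 2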
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users


