[Xen-users] Big I/O performance difference between dom0 and domU

To: Xen-Users <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] Big I/O performance difference between dom0 and domU
From: Marcin Owsiany <marcin@xxxxxxxxxx>
Date: Tue, 17 Apr 2007 16:28:53 +0100
Hi,

I am setting up a dual-CPU PowerEdge 2550 system with a PERC 3/Di
controller (aacraid) and three 18GB disks in RAID5, running Xen
3.0.3-0-2 (the Debian package in etch) with the credit scheduler and
PAE. This is not the best hardware, but what worries me more is the
poor I/O performance in the domU compared to dom0.
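
In case the configuration matters: the domU is a plain PV guest with
its volumes exported as phy: devices, along these lines (the device
names and paths below are illustrative, not my exact ones):

# /etc/xen/testdomu.cfg -- sketch; names and paths are illustrative
kernel  = '/boot/vmlinuz-2.6.18-4-xen-686'
ramdisk = '/boot/initrd.img-2.6.18-4-xen-686'
name    = 'testdomu'
memory  = 500
vcpus   = 1
disk    = ['phy:/dev/vg0/domu-root,sda1,w',
           'phy:/dev/vg0/test,sda2,w']   # the 5GB LVM test volume
root    = '/dev/sda1 ro'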

With both domains given 500MB of RAM, testing with bonnie++ on the
same 5GB LVM volume with an XFS filesystem and a 4GB test data size,
I'm getting:

-----+-------------------+--------------------+-----------------------
Test | block read [kB/s] | block write [kB/s] | random seeks [/sec]
-----+-------------------+--------------------+-----------------------
dom0 | 10308             | 64806              | 325.3
domU |  7299             | 53469              | 265.6
-----+-------------------+--------------------+-----------------------
~drop|    30%            |    17%             |  18%
-----+-------------------+--------------------+-----------------------
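
The benchmark was run along these lines, alternately in dom0 and in
the domU, on the same logical volume (mount point illustrative):

# -s = data set size in MB (4GB), -r = RAM size in MB
mount -t xfs /dev/vg0/test /mnt/test   # /dev/sda2 from inside the domU
bonnie++ -d /mnt/test -s 4096 -r 500 -u root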

The results are basically the same whether I use the default vcpu
arrangement (two vcpus for dom0, one for the domU) or give each domain
a single vcpu pinned to a different physical CPU.
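
For the pinned case I used roughly the following ('testdomu' being the
illustrative domain name from the config sketch above):

xm vcpu-set Domain-0 1        # drop dom0 to a single vcpu
xm vcpu-pin Domain-0 0 0      # dom0 vcpu 0 -> physical CPU 0
xm vcpu-pin testdomu 0 1      # domU vcpu 0 -> physical CPU 1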

Any suggestions are welcome...


-- 
Marcin Owsiany <marcin@xxxxxxxxxx>              http://marcin.owsiany.pl/
GnuPG: 1024D/60F41216  FE67 DA2D 0ACA FC5E 3F75  D6F6 3A0D 8AA0 60F4 1216
 
"Every program in development at MIT expands until it can read mail."
                                                              -- Unknown
