[Xen-users] IO performance testing between dom0 and guests

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] IO performance testing between dom0 and guests
From: "Filip Sergeys" <filip.sergeys@xxxxxxxxxxxxxxxx>
Date: 07 Apr 2005 13:58:46 +0200
Hi,

I'm using Debian testing as dom0, with Xen 2.0.4 compiled from source. The machine is named intra2.
The guest OS is SuSE 9.1, named staging, and runs a database server (MaxDB).
The disk layout is as follows (the matching domain-config disk lines are sketched below):

disk1 160G
disk2 80G

partition /dev/sda3 5G running dom0
partition /dev/sda5 5G running the guest; exported as 'phy:sda5,sda5,w'; containing the root installation
partition /dev/md0  software raid0, chunk size 64K, built from /dev/sda8 and /dev/sdb1; exported as 'phy:md0,sda7,w'; containing the database
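
For reference, the two phy: entries above would come from disk lines like these in the guest's Xen 2.0 domain config. This is a minimal sketch: the file name, kernel path and memory value are assumptions, and only the disk list is taken from the layout above.

    # /etc/xen/staging -- hypothetical domain config for the guest
    kernel = "/boot/vmlinuz-2.6-xenU"   # placeholder kernel path
    name   = "staging"
    memory = 256                        # assumed guest RAM, in MB
    disk   = ['phy:sda5,sda5,w',        # root installation
              'phy:md0,sda7,w']         # raid0 device holding the database
    root   = "/dev/sda5 ro"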

My impression is that database performance is worse than running natively on the system, so I started running iostat -xd 5 to see what is happening on the disks. During queries, iowait clearly sits between 99 and 100% for an unusually long time.
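
For anyone reproducing this: with sysstat's iostat, the columns worth watching are await (average time per request, in ms), avgqu-sz (queue length) and %util (device saturation), e.g.:

    # extended per-device statistics every 5 seconds
    # (the first report shows averages since boot and can be ignored)
    iostat -xd 5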

I tried to test read performance with hdparm -t /dev/sda5 in dom0 ...
/dev/sda5 DOM0
Timing buffered disk reads:   66 MB in  3.04 seconds =  21.71 MB/sec
Timing buffered disk reads:   50 MB in  3.06 seconds =  16.34 MB/sec
Timing buffered disk reads:  108 MB in  3.02 seconds =  35.76 MB/sec
Timing buffered disk reads:  112 MB in  3.06 seconds =  36.60 MB/sec
Timing buffered disk reads:  166 MB in  3.01 seconds =  55.15 MB/sec
Timing buffered disk reads:  170 MB in  3.00 seconds =  56.67 MB/sec
Timing buffered disk reads:  172 MB in  3.03 seconds =  56.77 MB/sec
Timing buffered disk reads:  170 MB in  3.02 seconds =  56.29 MB/sec

... but ran into a strange phenomenon when the same command was run in the guest OS ...
/dev/sda5 DOM1
Timing buffered disk reads:   56 MB in  3.05 seconds =  18.36 MB/sec
Timing buffered disk reads:   94 MB in  3.02 seconds =  31.13 MB/sec
Timing buffered disk reads:  138 MB in  3.04 seconds =  45.39 MB/sec
Timing buffered disk reads:  172 MB in  3.00 seconds =  57.33 MB/sec
Timing buffered disk reads:  208 MB in  3.05 seconds =  68.20 MB/sec
Timing buffered disk reads:  246 MB in  3.07 seconds =  80.13 MB/sec
Timing buffered disk reads:  286 MB in  3.13 seconds =  91.37 MB/sec
Timing buffered disk reads:  322 MB in  3.15 seconds = 102.22 MB/sec
Timing buffered disk reads:  358 MB in  3.12 seconds = 114.74 MB/sec

... it's getting faster every time. Why is that?
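
A plausible explanation (an assumption on my side, not something I have verified): hdparm -t flushes the buffer cache of the kernel it runs in before timing, but inside a guest that flush does not necessarily reach every cache on the path to the physical disk, so blocks read by earlier runs keep coming back at memory speed and each run looks faster than the last. One way to take caching out of the picture is to read a different region of the device on every run, for example:

    # read four different 100 MB regions, so no earlier run can have
    # warmed a cache for the current one (offsets are arbitrary;
    # skip= counts in units of bs, i.e. 1 MB here)
    for off in 0 200 400 600; do
        dd if=/dev/sda5 of=/dev/null bs=1M count=100 skip=$off
    done

GNU dd prints a per-run summary; if yours does not report throughput, prefix each run with time.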

Since hdparm gave me no answers, I tried bonnie++ -d /tmp -r 200 -s 400 -n 0 -b -u root on both dom0 and the guest. This is the result for the dom0 (intra2) root disk:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
intra2         400M 26427  98 51944  12 22950   1 17090  58 51638   3 195.4   0

This is the result for the guest (staging):

Version 1.01d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
staging        400M 132810  96 48009  11 53177   5 120590  99 +++++ +++ 623.1   0

DOM0,400M,26427,98,51944,12,22950,1,17090,58,51638,3,195.4,0
Guest,400M,132810,96,48009,11,53177,5,120590,99,+++++,+++,623.1,0

For the raid0 device, these are the results:

Version  1.03       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
intra2         400M 26442  98 71103  16 31414   5 24701  94 69430   6 316.1   1


Version 1.01d       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
staging        400M 133632  96 69970  17 70488   8 120936  99 +++++ +++ 619.5   0

DOM0,400M,26442,98,71103,16,31414,5,24701,94,69430,6,316.1,1
staging,400M,133632,96,69970,17,70488,8,120936,99,+++++,+++,619.5,0

Based on what I saw with hdparm, can I trust the results from bonnie++?
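
On trusting bonnie++: its rule of thumb is that the test size should be at least twice the RAM the benchmark's data can be cached in, and -r 200 -s 400 only accounts for 200 MB. If dom0 or the machine as a whole has more memory than the guest, the guest's 400 MB working set can still fit in a cache underneath it, which would fit the suspiciously high figures above (120 MB/s per-char input, and +++++ means a phase finished too quickly for bonnie++ to time it meaningfully). A hedged re-test (the 2048 MB size is an assumption; substitute twice your real total RAM):

    # size the test file at 2x total machine RAM, e.g. 2 GB on a 1 GB box
    bonnie++ -d /tmp -s 2048 -n 0 -b -u root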

If somebody knows how to tackle my IO testing problem, please let me know.

Thanks in advance

Cheers,

Filip.

-- 
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
* System Engineer, Verzekeringen NV *
* www.verzekeringen.be              *
* Oostkaai 23 B-2170 Merksem        *
* 03/6416673 - 0477/340942          *
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users