Re: [Xen-users] Disk IO trouble in Xen

To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] Disk IO trouble in Xen
From: Greg Hellings <greg.hellings@xxxxxxxxxxxx>
Date: Mon, 05 May 2008 14:46:11 -0700
Delivery-date: Mon, 05 May 2008 14:48:26 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <481AF557.7030901@xxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Aciu+W7brUb+EBrsEd2umQAX8t0tvQ==
Thread-topic: [Xen-users] Disk IO trouble in Xen
User-agent: Microsoft-Entourage/11.3.3.061214
I started this thinking I was having an IO problem.  Now, I'm not so sure.
I am using a pre-allocated disk, not a sparse file.
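
(For the record, one quick way to confirm that is to compare the
apparent size of the image with the blocks actually allocated for it,
e.g.

[root@localhost ~]# ls -lhs /var/lib/xen/images/dddevdb.img
[root@localhost ~]# du -h --apparent-size /var/lib/xen/images/dddevdb.img
[root@localhost ~]# du -h /var/lib/xen/images/dddevdb.img

If the two du figures match, the file is fully allocated rather than
sparse.)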

Write performance seems about the same on both:

Dom0
[root@localhost ~]# dd if=/dev/zero of=/tmp/dummy count=2097152
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 29.8368 seconds, 36.0 MB/s

DomU
[root@localhost ~]# dd if=/dev/zero of=/tmp/dummy count=2097152
2097152+0 records in
2097152+0 records out
1073741824 bytes (1.1 GB) copied, 30.4716 seconds, 35.2 MB/s
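
(Side note: those dd runs use the default 512-byte block size
(2097152 * 512 = 1073741824 bytes) and go through the page cache, so
part of what they measure is caching rather than the disk itself.
Assuming the dd on CentOS 5 supports oflag=direct, a variant that
forces the data to disk before reporting, something like

[root@localhost ~]# dd if=/dev/zero of=/tmp/dummy bs=1M count=1024 oflag=direct
[root@localhost ~]# time ( dd if=/dev/zero of=/tmp/dummy bs=1M count=1024 ; sync )

might show a larger gap between Dom0 and DomU if the problem really is
in the block path.)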

Iozone shows very similar numbers on both the Dom0 and the DomU.  The DomU seems
to freeze periodically and drop network connections.  I assumed this was
IO-related because everything else seems fine and those hdparm numbers are so
far off.  Heavy IO-related tasks also seem slow.  I'm fairly baffled now.
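
(A rough way to check whether those freezes line up with IO wait,
assuming the sysstat package is installed, would be to leave something
like the following running in the DomU while it happens:

[root@localhost ~]# vmstat 5          # watch the "wa" column
[root@localhost ~]# iostat -x 5       # watch await and %util for xvda

and "xm top" on the Dom0 for the per-domain picture.)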

--
Greg


On 5/2/08 4:04 AM, "Sadique Puthen" <sputhenp@xxxxxxxxxx> wrote:

> Are you using a pre-allocated image or a sparse file? If the latter, it's
> likely to have performance problems like the ones you found.  What do you
> get if you test with dd, iozone, or bonnie++?
> 
> --Sadique
> 
> Greg Hellings wrote:
>> I'm having terrible IO trouble with an image-based DomU.
>> 
>> 
>> This is the performance on the Dom0
>> [root@localhost /]# hdparm -tT /dev/sda
>> 
>> /dev/sda:
>>  Timing cached reads:   3228 MB in  2.00 seconds = 1617.22 MB/sec
>>  Timing buffered disk reads:  174 MB in  3.02 seconds =  57.60 MB/sec
>> 
>> 
>> And this is the performance on the DomU
>> [root@localhost ~]# hdparm -tT /dev/xvda
>> 
>> /dev/xvda:
>>  Timing cached reads:   3336 MB in  2.00 seconds = 1669.98 MB/sec
>>  Timing buffered disk reads:   26 MB in  3.13 seconds =   8.30 MB/sec
>> 
>> 
>> Is it normal for the performance difference to be so great?  I'm running
>> CentOS 5, and the DomU image is stored on an LVM partition formatted with
>> ReiserFS.
>> 
>> Here is the config for the DomU
>> name = "dddevdb"
>> uuid = "927f1e915813c2dec60c8f76c2716783"
>> maxmem = 3072
>> memory = 2042
>> vcpus = 2
>> bootloader = "/usr/bin/pygrub"
>> on_poweroff = "destroy"
>> on_reboot = "restart"
>> on_crash = "restart"
>> vfb = [ "type=vnc,vncunused=1,keymap=en-us" ]
>> disk = [ "tap:aio:/var/lib/xen/images/dddevdb.img,xvda,w"]
>> vif = [ "mac=00:16:3e:62:7a:d1,bridge=xenbr0" ]
>> 
>> 
> 
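
(One more data point on the config quoted above: the disk line uses a
file-backed tap:aio image sitting on ReiserFS over LVM.  A commonly
suggested comparison is to export a logical volume to the guest
directly with the phy: backend, along the lines of

disk = [ "phy:/dev/VolGroup00/dddevdb,xvda,w" ]   # VG/LV names here are made up

to see whether the slow buffered reads follow the file-backed setup.)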


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users