xen-devel

Re: [Xen-devel] Strange Xen Disk I/O performance among DomUs

To: Jia Rao <rickenrao@xxxxxxxxx>
Subject: Re: [Xen-devel] Strange Xen Disk I/O performance among DomUs
From: William Pitcock <nenolod@xxxxxxxxxxxxxxxx>
Date: Wed, 01 Apr 2009 15:46:43 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 01 Apr 2009 13:48:18 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <994429490904010726id4cec82ld03ebc799e349b91@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <994429490904010726id4cec82ld03ebc799e349b91@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

Hi,

On Wed, 2009-04-01 at 10:26 -0400, Jia Rao wrote:
> Hi all,
> 
> I tested the xen vm disk I/O this weekend and had some interesting
> observations:
> 
> I ran TPC-C benchmarks (mostly small random disk reads) within two
> PV VMs with exactly the same resource and software configuration. I
> started the benchmarks in the two VMs at the same time (started with
> a script; the time difference is within a few ms). The Xen VM
> scheduler always seems to favor one VM, which results in 50% better
> performance than the other VM. I changed the order of VM creation
> and application start-up, and the same VM always got better
> performance, 30%-50% better.

Xen does not actually handle the scheduling of disk I/O at the moment;
it leaves that up to the dom0. Which scheduler are you using in the
dom0? The 'deadline' scheduler is probably going to be the one you're
looking for.
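
You can check which elevator the dom0 is using, and switch it without
a reboot. Assuming the VM disks sit on sda (substitute your own
device):

    # the scheduler shown in [brackets] is the active one
    cat /sys/block/sda/queue/scheduler

    # switch to deadline at runtime (as root)
    echo deadline > /sys/block/sda/queue/scheduler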

> 
> What could be the reason that Xen always favors a specific VM?

This happens with CFQ. I'm not sure whether it is an intentional
design decision, but in practice busier I/O tasks get priority. As
mentioned above, the 'deadline' scheduler should deliver more even
results, as it dispatches requests by per-request deadlines rather
than by per-process fairness.
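
If you want to watch the imbalance while both benchmarks are running,
iostat from the sysstat package (assuming it is installed in your
dom0) shows per-device numbers:

    # extended per-device statistics, refreshed every second;
    # compare the devices backing the two VMs' disks
    iostat -x 1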

> 
> I ran the above test several more times. Between runs, I purged the
> cached data within each VM to keep the I/O demand the same. It is
> interesting that the performance gap between the two VMs became
> smaller and smaller. After 6 runs, the performance was almost the
> same.

Yes, this sounds like it is due to CFQ. Try again with the 'deadline'
scheduler. You can set it permanently by adding elevator=deadline to
the line that loads the dom0 kernel (the module line, under Xen) in
/boot/grub/menu.lst.
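
An entry might look like this (the paths and version numbers are
illustrative, match them to your own installation):

    title CentOS 5.1 (Xen)
        root (hd0,0)
        kernel /xen.gz
        module /vmlinuz-2.6.18-xen ro root=/dev/VolGroup00/LogVol00 elevator=deadline
        module /initrd-2.6.18-xen.img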

> 
> Does anyone have any idea? Does the VM scheduler schedule VMs based
> on history?
> 

I/O-wise, it's handled by the dom0. I have some rudimentary QoS code
for blkback, but it is not yet complete enough to be merged. Also note
that if that code is merged, it will still live in the dom0, not in
Xen itself.
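
In the meantime, if you stay on CFQ, you could experiment with ionice
(from util-linux) on whatever is serving each guest's I/O; note that
I/O priorities only take effect under CFQ, and the PID below is a
placeholder you would have to look up yourself:

    # demote one guest's I/O to the lowest best-effort priority
    ionice -c2 -n7 -p <pid>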

> I have already tried file-based, physical-partition and LVM-backed
> VM disks, with similar results.
> 
> I am using Xen 3.3.1, CentOS 5.1, Linux 2.6.18 x86_64.
> Each VM has 512M and 2 VCPUs, not pinned. Dom0 has 512M and 8 VCPUs,
> not pinned.
> Host: Dell PowerEdge 1950, 8G RAM, two quad-core Intel Xeons.
> 

William


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
