This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] Strange Xen Disk I/O performance among DomUs

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Strange Xen Disk I/O performance among DomUs
From: Jia Rao <rickenrao@xxxxxxxxx>
Date: Wed, 1 Apr 2009 10:26:05 -0400
Delivery-date: Wed, 01 Apr 2009 07:27:18 -0700
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi all,

I tested the xen vm disk I/O this weekend and had some interesting observations:

I ran TPC-C benchmarks (mostly small random disk reads) in two PV VMs with exactly the same resource and software configuration. I started the benchmarks in the two VMs at the same time (launched from a script; the start-time difference is within a few milliseconds). The Xen VM scheduler always seems to favor one VM, which gives it up to 50% better performance than the other. I changed the order of VM creation and of application start-up, but that specific VM always performed better, by 30%-50%.

What could be the reason that Xen always favors a specific VM?

I ran the above test several more times. Between runs, I purged the cached data within each VM so that the I/O demand was always the same. Interestingly, the performance gap between the two VMs became smaller and smaller; after 6 runs, the performance was almost the same.
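For reference, the cache purge inside each guest can be done by dropping the Linux page cache, along these lines (a sketch of the idea; it requires root inside the guest, and the exact commands may differ from what I scripted):

```shell
# Drop the Linux page cache inside each guest so the next benchmark run
# starts cold (sketch; requires root, Linux >= 2.6.16).
sync                                      # flush dirty pages to disk first
if [ -w /proc/sys/vm/drop_caches ]; then
    echo 3 > /proc/sys/vm/drop_caches     # free page cache, dentries, inodes
else
    echo "run as root to drop caches" >&2
fi
```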

Does anyone have any idea? Does the VM scheduler schedule VMs based on history?
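For what it's worth, the per-domain credit-scheduler parameters can be inspected and adjusted with xm (the domain name "vm1" and the weight value here are just examples, not my actual settings):

```
# Show the credit-scheduler weight and cap for one domain (Xen 3.3).
xm sched-credit -d vm1

# Example: give a domain twice the default weight of 256.
xm sched-credit -d vm1 -w 512
```

Both of my domains use whatever the defaults are; I have not tuned these.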

I have already tried file-backed, physical-partition, and LVM-backed virtual disks, with similar results.
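Concretely, the three backends correspond to disk lines like these in the domU config file (paths and device names are made-up examples, not my actual setup):

```
# file-backed image
disk = [ 'file:/var/lib/xen/images/vm1.img,xvda,w' ]

# raw physical partition
disk = [ 'phy:/dev/sda5,xvda,w' ]

# LVM logical volume
disk = [ 'phy:/dev/vg0/vm1-disk,xvda,w' ]
```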

I am using Xen 3.3.1 and CentOS 5.1 with Linux 2.6.18, x86_64.
Each VM has 512 MB of memory and 2 VCPUs (not pinned); Dom0 has 512 MB and 8 VCPUs (not pinned).
Host: Dell PowerEdge 1950 with 8 GB of RAM and two quad-core Intel Xeons.

Thanks in advance,
Xen-devel mailing list