
[Xen-devel] strange xen disk performance

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] strange xen disk performance
From: Jia Rao <rickenrao@xxxxxxxxx>
Date: Mon, 19 Jul 2010 16:13:39 -0400
Hi all,

I observed some strange disk performance in a Xen environment.

The test bed:
Dom0: 2.6.18.8, one CPU, pinned to a physical core, CFQ disk scheduler.
Xen: 3.3.1
Guest OS: 2.6.18.8; virtual disks are physical partitions (phy:/dev/sdbX); each guest has one virtual CPU, pinned to a separate core.
Hardware: two quad-core Intel Xeon CPUs.
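For concreteness, the relevant pieces of one guest's xm config would look roughly like the sketch below (the partition, core number, and guest device name are illustrative, not the exact values used):

    # sketch of one guest's config (Xen 3.x xm syntax), illustrative values
    vcpus = 1
    cpus  = "2"                          # pin the guest's vcpu to physical core 2
    disk  = [ 'phy:/dev/sdb1,xvda1,w' ]  # raw partition exported via blkback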

I ran some disk I/O performance tests with two VMs sharing the physical host.

Both VMs run the iozone benchmark, doing a sequential read of a 1GB file with a 32K request size. The reads were issued with O_DIRECT, skipping the domU's buffer cache.
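An iozone command matching this description would be roughly the following (the file path is a placeholder, and the file must already exist, e.g. from an earlier write pass with -i 0):

    # sequential read of a 1GB file in 32K records, using O_DIRECT (-I)
    iozone -i 1 -s 1g -r 32k -I -f /mnt/test/iofile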

To get a reference throughput, I issued the reads from each VM individually. Each VM achieved 56MB/s-61MB/s, which is the limit of the system for 32K sequential reads.

Then I issued the reads from the two VMs simultaneously. Each VM got 22MB/s-24MB/s, around 45MB/s combined for the whole system, which is pretty normal.

The strangest thing was that, to make sure the above results reflected pure disk performance and had nothing to do with the buffer cache, I repeated the test but purged the cached data in domU and dom0 before each run.

sync and echo 3 > /proc/sys/vm/drop_caches were executed in both domU and dom0. The resulting throughput of each VM increased to 56MB/s-61MB/s, giving a system throughput of around 120MB/s. I ran the test 10 times; 9 out of 10 runs showed this strange result, and only one looked like the original test.
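Concretely, the purge sequence before each run was:

    # run in dom0 and in each domU before starting iozone
    sync                                  # flush dirty pages to disk
    echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries, and inodes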

This seems impossible to me; it must have something to do with caching.

My question is: does Xen or dom0 cache any disk I/O data from the guest OS? dom0's buffer cache seems unrelated, since I had already purged everything.
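One quick way to check would be to watch dom0's page-cache counters while both guests are reading; if blkback really bypasses dom0's cache for phy: devices, Buffers and Cached should stay roughly flat:

    # in dom0, while the benchmark runs
    watch -n 1 'grep -E "^(Buffers|Cached):" /proc/meminfo'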

Any ideas?

Thanks a lot.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel