Hi all,
I observed some strange disk performance behavior under a Xen environment.
The test bed:
- Xen: 3.3.1
- Dom0: 2.6.18.8, one CPU, pinned to a physical core, CFQ disk scheduler
- Guest OS (domU): 2.6.18.8, one virtual CPU each, pinned to separate cores; virtual disks use physical partitions (phy:/dev/sdbX, see the config sketch below)
- Hardware: two quad-core Intel Xeon CPUs
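For reference, the relevant part of each domU's config looks roughly like the sketch below. The partition, virtual device name, and pinned core are placeholders for illustration, not the exact values I used:

    # Rough domU config sketch (Xen 3.3 xm syntax); partition, device
    # name and pinned core are placeholders, not the actual values.
    vcpus = 1
    cpus  = "2"                          # pin the single VCPU to one physical core
    disk  = [ 'phy:/dev/sdb1,xvda1,w' ]  # virtual disk backed by a physical partition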
I did some tests on disk I/O performance when two VMs share the physical host.
Both VMs run the iozone benchmark, sequentially reading a 1GB file with a 32K request size. The reads are issued as O_DIRECT I/O, skipping the domU's buffer cache.
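For concreteness, the run in each domU is along the lines of the command below. The exact options and file path are my reconstruction (an assumption), not a copy of the actual command line; the test file must already exist, e.g. from an earlier write run.

    # Hypothetical iozone invocation (assumed options, not the exact command used):
    #   -i 1  : sequential read/re-read test
    #   -I    : use O_DIRECT, bypassing domU's buffer cache
    #   -s 1g : 1GB file size
    #   -r 32k: 32K request size
    iozone -i 1 -I -s 1g -r 32k -f /mnt/test/iozone.tmp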
To get a reference throughput, I issued the read from each VM individually. Each got 56MB/s-61MB/s, which is the limit of the system for 32K sequential reads.
Then I issued the reads from the two VMs simultaneously. Each VM got 22MB/s-24MB/s, which adds up to around 45MB/s for the whole system; that is pretty normal.
The strangest thing was this: to make sure the above results come from pure disk performance and have nothing to do with the buffer cache, I did the same test again but purged the cached data in domU and dom0 before the test.
sync and echo 3 > /proc/sys/vm/drop_caches were executed in both domU and dom0. The resulting throughput of each VM increased to 56MB/s-61MB/s, which gives a system throughput of around 120MB/s. I ran the test 10 times; 9 out of 10 showed this strange result, and only one looked like the original test.
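To be precise, the purge step before each run is just the following, executed on both the dom0 and domU consoles:

    # Run in both dom0 and domU before each measurement:
    sync
    echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes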
It seems impossible to me, and it must have something to do with caching.
My question is: does Xen or dom0 cache any disk I/O data from the guest OS? It seems dom0's buffer cache has nothing to do with this, because I had already purged everything.
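One way I could think of to double-check this (just a sketch of an idea, not something I have verified) is to watch dom0's page cache size while the two guests are reading; if dom0 were caching the guests' disk I/O, "Cached" should grow by roughly the size of the files being read:

    # In dom0, sample the page cache size before, during and after the guests' run:
    grep ^Cached /proc/meminfo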
Any ideas?
Thanks a lot.