Hi,
On Wed, 2009-04-01 at 10:26 -0400, Jia Rao wrote:
> Hi all,
>
> I tested Xen VM disk I/O this weekend and had some interesting
> observations:
>
> I ran TPC-C benchmarks (mostly random small disk reads) within two VMs
> (PV) with exactly the same resource and software configuration. I
> started the benchmarks in the two VMs at the same time (started with a
> script; the time difference is within several ms). The Xen VM
> scheduler always seems to favor one VM, which results in 50% better
> performance over the other VM. I changed the sequence of VM creation
> and the application start order; the same specific VM always got
> better performance, 30%-50% better.
Xen does not actually handle the scheduling of disk I/O at the moment;
it leaves that up to the dom0. Which scheduler are you using in the
dom0? The 'deadline' scheduler is probably going to be the one you're
looking for.
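You can check which elevator the dom0 is using, and flip it at runtime,
through sysfs. A minimal sketch, assuming the VM disks live on /dev/sda
(substitute whichever device actually backs them):

    # the scheduler shown in brackets is the active one
    cat /sys/block/sda/queue/scheduler
    noop anticipatory deadline [cfq]

    # switch to deadline on the fly, no reboot needed
    echo deadline > /sys/block/sda/queue/scheduler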
>
> What could be the reason that Xen always favors a specific VM?
This happens with CFQ. I'm not sure whether it is an intentional design
decision, but in practice busier I/O tasks get priority. As mentioned
above, the 'deadline' scheduler should deliver more even results, as it
dispatches requests strictly by expiry deadline rather than giving each
process a timeslice the way CFQ does.
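If you do stay on CFQ, you could also try leveling the two guests by
hand with ionice on the dom0 processes that serve their disks (tapdisk
for file-backed disks; the PIDs here are hypothetical):

    # best-effort class (-c2); priority runs 0 (highest) to 7 (lowest)
    ionice -c2 -n4 -p 1234
    ionice -c2 -n4 -p 1235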
>
> I ran the above test several more times. Between runs, I purged the
> cached data within each VM so the I/O demand was always the same. It
> is interesting that the performance gap between the two VMs became
> smaller and smaller. After 6 runs, the performance was almost the
> same.
Yes, this sounds like it is due to CFQ. Try again with the 'deadline'
scheduler. You can make the change persistent by adding
elevator=deadline to the line that loads the dom0 kernel in
/boot/grub/menu.lst.
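With Xen, note that this is a module line rather than the kernel line,
because the kernel line loads the hypervisor itself. A sketch of what
the entry might look like, with paths and version strings purely
illustrative:

    title CentOS 5.1 (Xen)
            root (hd0,0)
            kernel /xen.gz-3.3.1
            module /vmlinuz-2.6.18-xen ro root=/dev/VolGroup00/LogVol00 elevator=deadline
            module /initrd-2.6.18-xen.img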
>
> Does anyone have any idea? Does the VM scheduler schedule VMs based on
> history?
>
I/O-wise, it's handled by the dom0. I have some rudimentary QoS code for
blkback, but it is not yet complete enough to be merged. Also note that
even once that code is merged, it will still live in the dom0, not in
Xen itself.
> I have already tried file-based, physical-partition, and LVM VM disks,
> with similar results.
>
> I am using Xen 3.3.1, CentOS 5.1, Linux 2.6.18 x86_64.
> Each VM has 512M of memory and 2 VCPUs, not pinned. Dom0 has 512M and
> 8 VCPUs, not pinned.
> Host: Dell PowerEdge 1950, 8G of memory, two quad-core Intel Xeons.
>
William
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel