Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re: [Xen-devel] Direct I/O to domU seeing a 30% performance hit
Hi,
I did not use any file-backed block devices. To get the real disk I/O
performance for the guest domains, all of the guest domain OSes are installed
on separate logical volumes (each guest domain is installed on an independent
hard drive).
I tried changing the I/O scheduler to anticipatory for both the guest
domains (noop is the default) and domain0, but the results are the same as
those I collected before.
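For reference, the scheduler can be inspected and switched per block device at runtime through sysfs; a minimal sketch (the device name sda is an assumption, substitute the disk backing each domain):

```shell
#!/bin/sh
# Hypothetical device name -- substitute the disk backing each domain.
DEV=${1:-sda}
SCHED_FILE="/sys/block/$DEV/queue/scheduler"

if [ -r "$SCHED_FILE" ]; then
    # The active scheduler is shown in brackets,
    # e.g. "noop [anticipatory] deadline cfq"
    cat "$SCHED_FILE"
    # To switch schedulers (as root):
    # echo anticipatory > "$SCHED_FILE"
else
    echo "no scheduler file for $DEV"
fi
```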
Regards,
Liang
----- Original Message -----
From: "Christopher G. Stach II" <cgs@xxxxxxxxx>
To: "George Dunlap" <gdunlap@xxxxxxxxxxxxx>
Cc: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>; "Emmanuel Ackaouy"
<ack@xxxxxxxxxxxxx>; "Liang Yang" <multisyncfe991@xxxxxxxxxxx>; "xen-devel"
<xen-devel@xxxxxxxxxxxxxxxxxxx>; "John Byrne" <john.l.byrne@xxxxxx>
Sent: Monday, November 13, 2006 8:49 AM
Subject: Re: Performance data of Linux native vs. Xen Dom0 and Xen DomU. Re:
[Xen-devel] Direct I/O to domU seeing a 30% performance hit
George Dunlap wrote:
One of the strange things, though, is that the difference should be so
big between domU and dom0, which are using the exact same kernel
(correct me if I'm wrong).
I'm not familiar with the I/O scheduling. Is it possible that the I/O
scheduling inside the domU is interacting poorly with the I/O
scheduling in dom0? That's one hypothesis for why domU writes are
slower than dom0 writes; but that doesn't seem to explain why domU
reads would be *faster* than dom0 reads.
-George
Correct me if I'm wrong, but wouldn't a file-backed block device for the
domU be reading from memory if it's cached in the dom0? That would be a
bit faster.
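As a rough illustration of that effect (the file size and timing approach here are made up for the sketch), a repeated read of the same backing file is typically served from the page cache rather than the physical disk:

```python
import os
import tempfile
import time

# Illustrative sketch: once a backing file is resident in the page cache,
# reads are served from RAM rather than the physical disk. Note that
# without O_DIRECT the first pass may already be cached too (we just
# wrote the file); the point is only that cached reads avoid real I/O.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(os.urandom(32 * 1024 * 1024))  # 32 MiB backing file

def read_all():
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1 << 20):  # read in 1 MiB chunks
            pass
    return time.perf_counter() - start

first = read_all()   # likely cached already, since we just wrote the file
second = read_all()  # almost certainly served from the page cache
print(f"first: {first:.4f}s  second: {second:.4f}s")
os.remove(path)
```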
Also, in the past it seemed the domU schedulers were ignored and were
implicitly noop, relying on the dom0's scheduler (which did all of the
real I/O). Has this changed?
--
Christopher G. Stach II
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel