RE: [Xen-users] Big I/O performance difference between dom0 and domU
> -----Original Message-----
> From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> [mailto:xen-users-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of
> Marcin Owsiany
> Sent: 18 April 2007 10:48
> To: xen-users@xxxxxxxxxxxxxxxxxxx
> Cc: Liang Yang
> Subject: Re: [Xen-users] Big I/O performance difference
> between dom0 and domU
>
> On Tue, Apr 17, 2007 at 05:57:02PM +0100, Marcin Owsiany wrote:
> > On Tue, Apr 17, 2007 at 09:34:10AM -0700, Liang Yang wrote:
> > > What is the CPU utilization when you did this I/O
> > > performance measurement?
> >
> > "% CPU" reported by bonnie when writing/reading/random seeking in is
> > 6/3/0% in dom0 and 4/2/0% in domU. bonnie was the only thing running
> > when performing the test.
> >
> > > As far as I can remember, Bonnie++ does not support using
> > > outstanding I/Os. So your target RAID volume may not be
> > > saturated, i.e. you did not get the peak performance.
> >
> > What other benchmarking tool would you suggest?
>
I took your data and reformatted it a little. The percentage at the end of
each line is the difference between Dom0 and DomU. It seems to me that
reads aren't too bad as long as you keep a little bit of asynchronicity.
Writes are worse, which isn't entirely surprising.
Random operations come out much better, presumably because the overhead is
now largely hidden behind the much larger overhead of the disk-seek
operation.
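(In case it helps to reproduce the numbers: as far as I can tell the Diff
column below is simply (Dom0 - DomU) / Dom0. A minimal Python sketch, with
the sequential-read rates copied from the first table; the function name is
mine.)

# Relative DomU slowdown vs. Dom0, assuming Diff = (Dom0 - DomU) / Dom0.
def relative_diff(dom0_rate, domu_rate):
    return (dom0_rate - domu_rate) / dom0_rate * 100.0

# {threads: (Dom0 rate, DomU rate)} taken from the Sequential Reads table.
seq_reads = {1: (59.41, 47.22), 2: (55.44, 50.51),
             4: (54.86, 34.17), 8: (45.94, 30.68)}

for threads in sorted(seq_reads):
    dom0, domu = seq_reads[threads]
    print("%d thr: %6.2f%%" % (threads, relative_diff(dom0, domu)))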
The only way to avoid having SOME overhead between the DomU and Dom0
file access would be to give each DomU a dedicated IO-device, which
becomes a bit impractical if you have many DomU's that are all
IO-dependent. But if you have, for example, one DomU that is a database
server, that may be the right solution.
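(For what it's worth, "dedicated IO-device" here usually means PCI
passthrough. A hedged sketch of the relevant line in an xend-style domU
config, with a made-up PCI address; the device also has to be hidden from
Dom0 via pciback first, which I'm not showing.)

# Hypothetical example: give this guest the controller at 0000:03:00.0.
# The device must already be bound to pciback in Dom0.
pci = [ '0000:03:00.0' ]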
Note also that just because the benchmark uses threads doesn't necessarily
mean that the IO requests are saturating the bus, so the latency added on
the DomU-to-Dom0 path may account for much of the difference. In an
application that does some REAL work between IO requests, the impact will
be somewhat smaller (if it uses suitable prefetching, of course).
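(On the "outstanding I/Os" point from earlier in the thread: whether you
get any queue depth depends on actually keeping several requests in flight
at once. A rough, purely illustrative Python sketch of that idea -- the
device path, block size and counts are made up, and the page cache will
hide most of it unless the target is much larger than RAM.)

# Keep N reads in flight by giving each thread its own file descriptor and
# issuing reads at random offsets; this is a sketch, not a real benchmark.
import os, random, threading

PATH = "/dev/sdb1"          # hypothetical block device or large test file
BLOCK = 64 * 1024           # 64 KiB per read
READS_PER_THREAD = 1000
OUTSTANDING = 8             # roughly comparable to the 8-thread rows below

def worker():
    fd = os.open(PATH, os.O_RDONLY)
    size = os.lseek(fd, 0, os.SEEK_END)
    for _ in range(READS_PER_THREAD):
        os.lseek(fd, random.randrange(0, size - BLOCK), os.SEEK_SET)
        os.read(fd, BLOCK)  # one request outstanding per thread
    os.close(fd)

threads = [threading.Thread(target=worker) for _ in range(OUTSTANDING)]
for t in threads:
    t.start()
for t in threads:
    t.join()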
Sequential Reads
Num   Dom0    DomU
Thr   Rate    Rate    Diff
---   -----   -----   -------
 1    59.41   47.22    20.52%
 2    55.44   50.51     8.89%
 4    54.86   34.17    37.71%
 8    45.94   30.68    33.22%
                 min:   8.89%

Random Reads
Num   Dom0    DomU
Thr   Rate    Rate    Diff
---   -----   -----   -------
 1     0.94    1.39   -47.87%
 2     1.61    1.42    11.80%
 4     2.29    2.86   -24.89%
 8     2.72    4.08   -50.00%
                 min: -50.00%

Sequential Writes
Num   Dom0    DomU
Thr   Rate    Rate    Diff
---   -----   -----   -------
 1    10.48    6.98    33.40%
 2    10.08    6.00    40.48%
 4     9.74    5.88    39.63%
 8     9.56    5.27    44.87%
                 min:  33.40%

Random Writes
Num   Dom0    DomU
Thr   Rate    Rate    Diff
---   -----   -----   -------
 1     1.23    1.01    17.89%
 2     1.24    1.03    16.94%
 4     1.22    0.98    19.67%
 8     1.12    0.93    16.96%
                 min:  16.94%
--
Mats
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users