RE: [Xen-devel] Re: poor domU VBD performance.
> I am sorry to return to this issue after quite a long
> interruption. As I mentioned in an earlier post, I came across
> this problem when I was testing file-system performance. After
> the problems with raw sequential I/O seemed to have been fixed
> in the testing release, I turned back to my original problem.
> I did a simple test that, despite its simplicity, seems to put
> the IO subsystem under considerable stress. I took the /usr
> tree of my system and copied it five times into different
> directories on a slice of disk 1. This tree consists of
> 36000 files with about 750 MB of data. Then I started to copy
> each of these copies recursively onto disk 2 (each to its own
> location on that disk, of course). I ran these copies in
> parallel; the processes took about 6 to 7 minutes in DOM0,
> while they needed between 14.6 and 15.9 minutes in DOMU.
>
> Essentially, this means that under this heavy IO load I get
> back to the 40% ratio between IO performance in DOMU and IO
> performance in DOM0 that I initially reported. This may just
> be coincidence, but it is probably worth mentioning.
It's possible that dom0 doing prefetching, as well as the domU, is
hurting random IO performance. Do the iostat numbers suggest dom0 is
reading more data overall when it does the IO on behalf of a domU?
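For example (assuming the sysstat tools are installed), something like:

    iostat -k 5

run in dom0 during the test shows kB_read/s per device; compare the
totals for the backing device when the workload runs natively in dom0
against when it runs in the domU. Noticeably higher totals in the domU
case would point at double prefetching.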
We'll need a simpler way of reproducing this if any headway is to be
made debugging it.
It might be worth writing a program to do pseudo-random IO reads to a
partition, both in O_DIRECT and normal (buffered) mode, then run it in
dom0 and domU.
[Chris: you have such a program already, right? Can you post it, thanks]
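In the meantime, something along these lines would do. A rough,
untested sketch; the read count, block size and the 1 GiB test area
are arbitrary choices:

/* randread.c: time N pseudo-random reads from a partition,
 * optionally bypassing the page cache with O_DIRECT.
 * Build: gcc -O2 -o randread randread.c
 * Usage: ./randread /dev/<partition> [direct]
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/time.h>

#define BLKSZ  4096                 /* read size; O_DIRECT needs alignment */
#define NREADS 10000                /* number of random reads to issue     */
#define AREA   (1024*1024*1024LL)   /* bytes of the partition to cover     */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <device> [direct]\n", argv[0]);
        return 1;
    }

    int flags = O_RDONLY;
    if (argc > 2 && strcmp(argv[2], "direct") == 0)
        flags |= O_DIRECT;          /* bypass this domain's page cache */

    int fd = open(argv[1], flags);
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, BLKSZ, BLKSZ)) {  /* O_DIRECT-safe buffer */
        perror("posix_memalign");
        return 1;
    }

    srandom(42);                    /* fixed seed so dom0/domU runs compare */

    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    for (int i = 0; i < NREADS; i++) {
        /* pick a block-aligned offset within the test area */
        off_t off = (off_t)(random() % (AREA / BLKSZ)) * BLKSZ;
        if (pread(fd, buf, BLKSZ, off) != BLKSZ) {
            perror("pread");
            return 1;
        }
    }
    gettimeofday(&t1, NULL);

    double secs = (t1.tv_sec - t0.tv_sec)
                + (t1.tv_usec - t0.tv_usec) / 1e6;
    printf("%d reads of %d bytes in %.2fs (%.1f reads/s)\n",
           NREADS, BLKSZ, secs, NREADS / secs);
    close(fd);
    return 0;
}

Comparing the O_DIRECT numbers between dom0 and domU should expose the
raw blkfront/blkback overhead, while the buffered numbers should show
any interaction with prefetching.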
Ian
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel