RE: [Xen-devel] Re: poor domU VBD performance.

To: "peter bier" <peter_bier@xxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Re: poor domU VBD performance.
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Tue, 12 Apr 2005 11:51:45 +0100
Delivery-date: Tue, 12 Apr 2005 10:51:42 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcU/SyaoPE7NSRhqTDqjpW+uHhblLQAAa1/g
Thread-topic: [Xen-devel] Re: poor domU VBD performance.
 
> I am sorry to return to this issue after quite a long interruption.
> As I mentioned in an earlier post, I came across this problem
> while testing file-system performance. After the problems with
> raw sequential I/O seemed to have been fixed in the testing
> release, I turned back to my original problem.
> I did a simple test that, despite its simplicity, seems to put
> the IO subsystem under considerable stress. I took the /usr
> tree of my system and copied it five times into different
> directories on a slice of disk 1. This tree consists of
> 36000 files with about 750 MB of data. Then I started to copy
> each of these copies recursively onto disk 2 (each to its
> own location on that disk, of course). I ran these copies
> in parallel, and the processes took about 6 to 7 minutes in
> DOM0, while they needed between 14.6 and 15.9 minutes in DOMU.
> 
> Essentially, this means that under this heavy IO load I get
> back to the roughly 40% ratio between IO performance in DOMU
> and IO performance in DOM0 that I initially reported. This
> may just be coincidence, but it is probably worth mentioning.
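
For reference, the quoted workload boils down to five recursive copies run in parallel. A minimal sketch of it (the /mnt/disk1 and /mnt/disk2 paths are hypothetical, and "cp -a" stands in for whatever copy command was actually used):

#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    /* launch one recursive copy per tree, all five in parallel */
    for (int i = 0; i < 5; i++) {
        char src[64], dst[64];
        snprintf(src, sizeof src, "/mnt/disk1/usr%d", i);
        snprintf(dst, sizeof dst, "/mnt/disk2/usr%d", i);
        if (fork() == 0) {
            execlp("cp", "cp", "-a", src, dst, (char *)NULL);
            perror("execlp");
            _exit(1);
        }
    }
    /* wait for all five copies to finish */
    while (wait(NULL) > 0)
        ;
    return 0;
}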

It's possible that dom0 doing prefetch (read-ahead) as well as the domU is
messing up random IO performance. Do the iostat numbers suggest dom0 is
reading more data overall when doing the IO on behalf of a domU?

We'll need a simpler way of reproducing this if any headway is to be
made debugging it.

It might be worth writing a program to do pseudo-random IO reads to a
partition, both in DIRECT and normal mode, and running it in dom0 and domU.
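
Something along these lines might do; a minimal sketch, assuming Linux, with a made-up usage of "randread <partition> [direct]" (this is not Chris's program, just an illustration of the idea):

#define _GNU_SOURCE              /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK  4096              /* read size; also the O_DIRECT alignment */
#define NREADS 10000             /* number of pseudo-random reads per run  */

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: randread <partition> [direct]\n");
        return 1;
    }

    int flags = O_RDONLY;
    if (argc > 2 && strcmp(argv[2], "direct") == 0)
        flags |= O_DIRECT;                   /* bypass the buffer cache */

    int fd = open(argv[1], flags);
    if (fd < 0) { perror("open"); return 1; }

    off_t size = lseek(fd, 0, SEEK_END);     /* partition size in bytes */
    long nblocks = size / BLOCK;

    void *buf;
    if (posix_memalign(&buf, BLOCK, BLOCK))  /* aligned buffer for O_DIRECT */
        return 1;

    srandom(42);                             /* fixed seed: comparable runs */
    for (long i = 0; i < NREADS; i++) {
        off_t off = (off_t)(random() % nblocks) * BLOCK;
        if (pread(fd, buf, BLOCK, off) != BLOCK) {
            perror("pread");
            return 1;
        }
    }
    close(fd);
    return 0;
}

Timing the buffered and DIRECT runs in both dom0 and domU, and comparing the
iostat totals, should show whether dom0's prefetch is part of the story.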

[Chris: you have such a program already, right? Can you post it, thanks]

Ian

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel