To: "Andrew Theurer" <habanero@xxxxxxxxxx>, "Peter Bier" <peter_bier@xxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] poor domU VBD performance.
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Mon, 28 Mar 2005 21:14:10 +0100
Delivery-date: Mon, 28 Mar 2005 20:14:23 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcUzzREq1oOk22coSFqjPCGTHZL/8QAAcqXA
Thread-topic: [Xen-devel] poor domU VBD performance.
> > > > I found out that dom0 file-system IO and raw IO (using dd as a
> > > > tool to test throughput from the disk) are almost exactly the
> > > > same as with a standard Linux kernel without Xen. But raw IO
> > > > from domU to an unused disk (a second disk in the system) is
> > > > limited to forty percent of the speed I get within dom0.
> 
> Is the second disk exactly the same as the first one?  I'll try an
> IO test here on the same disk array with dom0 and domU and see what
> I get.

I've reproduced the problem, and it's a real issue.
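
For anyone wanting to reproduce it, the kind of raw-read test described
above is enough to show the gap. Something like the following (device
name illustrative), run first in dom0 and then in domU against the same
idle disk:

    dd if=/dev/hdb of=/dev/null bs=64k count=16384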

It only affects reads, and is almost certainly down to how the blkback
driver passes requests to the actual device.

Does anyone on the list actually understand the changes made to linux
block IO between 2.4 and 2.6?

In the 2.6 blkfront there is no run_task_queue() to flush requests to
the lower layer, and we use submit_bio() instead of 2.4's
generic_make_request(). It looks like this is happening synchronously
rather than queueing multiple requests. What should we be doing to get
things batched?
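
My current guess, sketched below against the ~2.6.11 block API (not
compiled; dispatch_batch is a made-up name for illustration): in 2.6,
submit_bio() only queues a bio on a plugged request queue, so unless
something unplugs the queue the requests trickle out when the plug
timer fires. The 2.4-style flush would correspond to one explicit
unplug after queueing a whole batch:

/*
 * Sketch only -- untested, for discussion.  The point: submit_bio()
 * just queues each bio on a plugged request queue; to get batching we
 * queue a whole batch and then kick the queue once, the moral
 * equivalent of 2.4's run_task_queue(&tq_disk).
 */
#include <linux/blkdev.h>
#include <linux/bio.h>
#include <linux/fs.h>

static void dispatch_batch(struct block_device *bdev,
                           struct bio **bios, int nbio, int rw)
{
	request_queue_t *q = bdev_get_queue(bdev);
	int i;

	for (i = 0; i < nbio; i++)
		submit_bio(rw, bios[i]);	/* queues, doesn't dispatch */

	/* Flush the batch to the driver in one go rather than
	 * waiting for the per-queue plug timer (~3ms) to fire. */
	if (q)
		generic_unplug_device(q);
}

If that model is right, blkback should do the unplug once per batch of
ring requests rather than per bio, but I may well be missing where the
unplug is supposed to come from, hence the question.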

Thanks,
Ian

 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel