WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
[Xen-devel] Re: poor domU VBD performance.

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Re: poor domU VBD performance.
From: peter bier <peter_bier@xxxxxx>
Date: Fri, 1 Apr 2005 16:36:15 +0000 (UTC)
Delivery-date: Fri, 01 Apr 2005 16:41:04 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <A95E2296287EAD4EB592B5DEEFCE0E9D1E3975@xxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Loom/3.14 (http://gmane.org/)
Ian Pratt <m+Ian.Pratt <at> cl.cam.ac.uk> writes:

> 
> > > I've checked in something along the lines of what you 
> > described into 
> > > both the 2.0-testing and the unstable trees. Looks to have 
> > identical 
> > > performance to the original simple patch, at least for a bulk 'dd'.
> > 
> > Can you post the patch here for review? Or just point me 
> > somewhere I can view it.
> 
> Jens,
> 
> Thanks for your help on this.
> 
> Here's Keir's updated patch:
> http://xen.bkbits.net:8080/xen-2.0-testing.bk/gnupatch <at> 424c1abd7LgWMiaskLEEAAX7ffdkXQ
> 
> Which is based on this earlier patch from you:
> http://xen.bkbits.net:8080/xen-2.0-testing.bk/gnupatch <at> 424bba4091aV1FuNksY_4w_z4Tvr3g
> 
> Best,
> Ian
> 
I have applied the blkback.c patch in xen0 and am now getting good results.
I have tested two systems, one with a standard IDE disk and another with
two SATA disks. I stumbled over this issue while doing filesystem I/O to
check the efficiency of xen-linux; I then switched to raw I/O on the block
devices and found that it didn't perform as I had hoped.
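A raw sequential read of that kind can be sketched as follows (DEV is a placeholder; it defaults to /dev/zero here only so the sketch runs anywhere — point it at the domU's actual VBD, e.g. /dev/sda1, for a real measurement):

```shell
# Read 512 MB sequentially from a block device, bypassing the filesystem,
# and let dd report the elapsed time and throughput on stderr.
# Compare the figure between Dom0 and DomU.
DEV=${DEV:-/dev/zero}   # placeholder default; use the real VBD to measure
dd if="$DEV" of=/dev/null bs=1M count=512
```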

Now I have switched back to filesystem operations. I test by copying a
"/usr" subtree from a Slackware 10.0 installation containing about 750 MB
in 2200 directories and 37000 files. When copying these files with the
target directory on the same device as the source directory, DomU achieves
between 90 and 93% of the Dom0 performance. When copying from a directory
on one device into a directory on another device, DomU lags further behind:
only 50 to 60 percent of the Dom0 performance, and less than when using
only one disk. I found that the sum of the busy percentages of the two
disks, as reported by iostat in Dom0, is always slightly above 100%. Does
this reflect that the reading and the writing both go through the VBD
driver? Neither device is ever 100% busy.

Any explanations?

Thanks in advance 

   Peter


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel