Re: [Xen-devel] bad write performance with qdisk with larger files in pv-domU

To: Ronny Hegewald <ronny.hegewald@xxxxxxxxx>
Subject: Re: [Xen-devel] bad write performance with qdisk with larger files in pv-domU
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Fri, 3 Jun 2011 19:16:39 -0400
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <201106032349.47095.ronny.hegewald@xxxxxxxxx>
References: <201106032349.47095.ronny.hegewald@xxxxxxxxx>
User-agent: Mutt/1.5.21 (2010-09-15)
On Fri, Jun 03, 2011 at 11:49:47PM +0000, Ronny Hegewald wrote:
> I'm using the following 32-bit setup:
> 
> - xen 4.1.0 
> - upstream linux-kernel 2.6.39 as dom0
> - linux 2.6.32 pv-domU that has several ext3 partitions mounted via qdisk
>   (same behaviour with a 2.6.39 kernel, so I continued the investigation with
>   the 2.6.32 kernel)
> 
> The read performance is good (ca. 60 MB/s).
> 
> For smaller files (< 30-40 MB) the write speed is OK.
> 
> But if I copy a larger file (ca. > 40 MB), the write speed drops to ca. 0.5 MB/s
> after the first ca. 40 MB are written.
> 
> One reason for the bad performance might be that qdisk doesn't use AIO. For
> testing purposes I activated AIO in hw/xen_disk.c (I set use_aio=1), but the
> domU froze shortly after the domU kernel started.

You could also use this patch:
http://darnok.org/xen/qdisk_vs_blkback_v3.1/qemu-enable-aio.patch

But why not use 3.0-rc1 with xen-blkback? Or, if you want to stay with 2.6.39,
you could use the

git://git.kernel.org/pub/scm/linux/kernel/git/konrad/xen.git stable/2.6.39.x

tree.


> 
> Is this performance impact expected when no AIO is used?

Yeah, it is slow.
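
Without AIO the backend handles each block request synchronously: it has to
finish one write before it can pick up the next, so the whole device model
stalls on every request. With AIO the request is only submitted and completion
arrives later via a callback. Roughly, in plain POSIX terms (a minimal sketch
of the pattern only, not the actual hw/xen_disk.c code):

/* Generic illustration of the difference, not the qemu code itself. */
#include <aio.h>
#include <string.h>
#include <unistd.h>

/* Synchronous: blocks until the data is written, so no other
 * request can be serviced in the meantime. */
ssize_t handle_write_sync(int fd, const void *buf, size_t len, off_t off)
{
        return pwrite(fd, buf, len, off);   /* caller stalls here */
}

/* Asynchronous: only queues the request; completion is picked up
 * later (aio_error()/aio_return()), so the next request can be
 * accepted immediately. */
int handle_write_aio(struct aiocb *cb, int fd, void *buf, size_t len, off_t off)
{
        memset(cb, 0, sizeof(*cb));
        cb->aio_fildes = fd;
        cb->aio_buf    = buf;
        cb->aio_nbytes = len;
        cb->aio_offset = off;
        return aio_write(cb);               /* returns right away */
}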
> 
> I compared the raw-block implementation from xen-qemu 4.1.0 with current
> upstream, in case xen-qemu is missing some bugfixes, and found the following
> patch, which looks a bit interesting:
> 
>       commit 4899d10d142e97eea8f64141a3507b2ee1a64f52
>       Author: Stefan Hajnoczi <stefanha@xxxxxxxxxxxxxxxxxx>
>       Date:   Mon Apr 19 13:34:11 2010 +0100
>       raw-posix: Use pread/pwrite instead of lseek+read/write
> 
>       This patch combines the lseek+read/write calls to use pread/pwrite
>       instead.  This will result in fewer system calls and is already used by
>       AIO.
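
That commit essentially folds two system calls into one per request; the
pattern it replaces looks roughly like this (a minimal sketch of the idea, not
the actual raw-posix code):

#include <unistd.h>

/* Old pattern: position, then read -- two system calls per request,
 * and the shared file offset has to be managed. */
static ssize_t read_at_old(int fd, void *buf, size_t count, off_t offset)
{
        if (lseek(fd, offset, SEEK_SET) == (off_t)-1)
                return -1;
        return read(fd, buf, count);
}

/* New pattern: one system call, offset passed explicitly. */
static ssize_t read_at_new(int fd, void *buf, size_t count, off_t offset)
{
        return pread(fd, buf, count, offset);
}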
> 
> 
> At first glance the patch cannot be backported 1:1, so I haven't tried it yet,
> because I doubt that it can make such a huge difference. Or would it be worth
> a try?
> 
> Any other ideas on how to investigate this issue further, in case the write
> speed should be better even without AIO? I know that the qdisk implementation
> is expected to be slower, but I would expect at least, let's say, 5 MB/s.
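
One way to narrow it down (an assumption about the test setup, not something
already tried in this thread) is to take the domU page cache out of the
picture and measure the raw blkfront -> qdisk path with O_DIRECT writes,
something along these lines:

/* Minimal O_DIRECT write-throughput test to run inside the domU.
 * The file name, block size and total size are placeholders. */
#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
        const size_t blk = 1 << 20;                 /* 1 MB per write */
        const int n = 100;                          /* 100 MB total   */
        struct timespec t0, t1;
        void *buf;
        int fd, i;

        if (posix_memalign(&buf, 4096, blk))
                return 1;
        memset(buf, 0xab, blk);

        /* path on one of the qdisk-backed ext3 partitions (placeholder) */
        fd = open("/mnt/test/odirect-test", O_WRONLY | O_CREAT | O_DIRECT, 0600);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < n; i++) {
                if (pwrite(fd, buf, blk, (off_t)i * blk) != (ssize_t)blk) {
                        perror("pwrite");
                        return 1;
                }
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("%.1f MB/s\n",
               n / ((t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9));
        close(fd);
        return 0;
}

If throughput still collapses after a few tens of MB with O_DIRECT, the
slowdown is below the domU page cache, i.e. in the backend path itself rather
than in domU writeback behaviour.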

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
