Re: [Xen-users] blk[front|back] does not hand over minimum and optimal_io_size to domU

To: Adi Kriegisch <adi@xxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] blk[front|back] does not hand over minimum and optimal_io_size to domU
From: Pasi Kärkkäinen <pasik@xxxxxx>
Date: Wed, 23 Mar 2011 17:50:33 +0200
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20110223132641.GM10906@xxxxxxxx>
References: <20110223132641.GM10906@xxxxxxxx>
User-agent: Mutt/1.5.18 (2008-05-17)
On Wed, Feb 23, 2011 at 02:26:41PM +0100, Adi Kriegisch wrote:
> Dear all,
> 
> I investigated a serious performance drop between Dom0 and DomU with
> LVM on top of RAID6 and blkback devices.
> While I get around 130MB/s write performance in Dom0, I only get 30MB/s
> in DomU. Inspecting this with dstat/iostat revealed a read rate of about
> 17-25MB/s while writing at around 40MB/s.
> The reads only occur on the disk devices assembled into the RAID6, not
> on the md device itself, so this is caused by RAID6 activity alone.
> The reason is parity recalculation (read-modify-write cycles for writes
> smaller than a full stripe) due to a too small optimal_io_size:
> On Dom0:
> blockdev --getiomin /dev/space/test
> 524288 (which is the chunk size)
> blockdev --getioopt /dev/space/test
> 3145728 (which is 6 * the chunk size, i.e. one full data stripe)
> 
> On DomU:
> blockdev --getiomin /dev/xvdb1
> 512
> blockdev --getioopt /dev/xvdb1
> 0 (so the kernel will use 1MB by default, IIRC)
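
For reference, these are the same values blockdev reads via the BLKIOMIN
and BLKIOOPT ioctls (also exposed in /sys/block/<dev>/queue/ as
minimum_io_size and optimal_io_size). A minimal standalone query tool for
reproducing the numbers -- the device path below is just an example:

/* query_io_limits.c -- print the minimum/optimal I/O size of a block
 * device, i.e. the values "blockdev --getiomin/--getioopt" reports.
 * Build: gcc -o query_io_limits query_io_limits.c
 * Usage: ./query_io_limits /dev/xvdb1
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>           /* BLKIOMIN, BLKIOOPT */

int main(int argc, char **argv)
{
        unsigned int io_min = 0, io_opt = 0;
        int fd;

        if (argc != 2) {
                fprintf(stderr, "usage: %s <block-device>\n", argv[0]);
                return 1;
        }
        fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
                perror("open");
                return 1;
        }
        if (ioctl(fd, BLKIOMIN, &io_min) < 0)   /* minimum_io_size */
                perror("BLKIOMIN");
        if (ioctl(fd, BLKIOOPT, &io_opt) < 0)   /* optimal_io_size */
                perror("BLKIOOPT");
        printf("minimum_io_size: %u\noptimal_io_size: %u\n", io_min, io_opt);
        close(fd);
        return 0;
}
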
> 
> minimum_io_size -- if not set -- defaults to the hardware block size,
> which seems to be set to 512 in xlvbd_init_blk_queue() (blkfront.c).
> Btw: blockdev --getbsz /dev/space/test gives 4096 on Dom0 while the
> DomU reports 512.
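
For context: xlvbd_init_blk_queue() sets the logical block size and the
request size limits on the frontend's queue, but never io_min/io_opt, so
the generic block layer falls back to io_min = 512 and io_opt = 0 --
exactly what the DomU reports above. Below is a hypothetical sketch of
what propagating the backend's topology might look like;
blk_queue_io_min() and blk_queue_io_opt() are the real block-layer
setters, but how the values would travel from blkback over xenstore is
assumed here, not existing code:

/* Hypothetical sketch, not actual blkfront code: apply backend queue
 * limits in xlvbd_init_blk_queue() (drivers/block/xen-blkfront.c),
 * assuming blkback exported them (e.g. via additional xenstore keys).
 */
#include <linux/blkdev.h>

static void xlvbd_set_topology(struct request_queue *rq,
                               unsigned int sector_size,
                               unsigned int io_min,
                               unsigned int io_opt)
{
        /* What blkfront effectively does today: */
        blk_queue_logical_block_size(rq, sector_size);

        /* Passing the backend's values through would let a domU
         * filesystem align its writes to the RAID chunk and stripe: */
        if (io_min)
                blk_queue_io_min(rq, io_min);   /* e.g. chunk size   */
        if (io_opt)
                blk_queue_io_opt(rq, io_opt);   /* e.g. stripe width */
}
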
> 
> I can mitigate the issue to some extent by using a much smaller chunk
> size, but this is IMHO just working around the problem.
> 
> Is this a bug or a regression? Or does this happen to everyone using
> RAID6 (and probably RAID5 as well) and no one noticed the drop until
> now?
> Is there any way to work around this issue?
> 
> Thanks,
>       Adi Kriegisch
> 
> PS: I am using a stock Debian/Squeeze kernel on top of Debian's Xen 4.0.1-2.
> 

Hello,

Did you find more info about this issue?

-- Pasi


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
