WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-users] blk[front|back] does not hand over minimum and optimal_io_size to domU
From: Adi Kriegisch <adi@xxxxxxxxxxxxxxx>
Date: Wed, 23 Feb 2011 14:26:41 +0100
Delivery-date: Wed, 23 Feb 2011 05:28:24 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)
Dear all,

I investigated a serious performance drop between Dom0 and DomU with
LVM on top of RAID6 and blkback devices.
While I get around 130MB/s write performance in Dom0, I only get 30MB/s
in DomU. Inspecting this with dstat/iostat revealed a read rate of
about 17-25MB/s while writing at around 40MB/s.
The reads only occur on the disk devices assembled into the RAID6, not
on the md device itself, so this is caused by RAID6 activity alone.
The reason is recalculation of the RAID checksums (parity) triggered
by a too small optimal_io_size:
On Dom0:
    # blockdev --getiomin /dev/space/test
    524288     (which is the chunk size)
    # blockdev --getioopt /dev/space/test
    3145728    (which is 6 * chunk size)

On DomU:
    # blockdev --getiomin /dev/xvdb1
    512
    # blockdev --getioopt /dev/xvdb1
    0          (so the kernel will use 1MB by default, IIRC)
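The 6 * chunk figure above can be sketched as simple stripe math,
assuming (hypothetically) an 8-disk RAID6, i.e. 2 parity chunks and 6
data chunks per stripe -- the disk count is my guess from the reported
numbers, not stated in this mail:

```shell
# Stripe-width sketch for a hypothetical 8-disk RAID6.
chunk=524288            # iomin reported on Dom0 == md chunk size
data_disks=$((8 - 2))   # RAID6 holds 2 chunks of parity per stripe
echo $((data_disks * chunk))   # full stripe width: 3145728
```

Writes smaller than a full stripe force md to read back data or parity
chunks to recompute parity, which matches the reads seen on the
component disks.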

minimum_io_size -- if not set -- defaults to the hardware block size,
which seems to be set to 512 in xlvbd_init_blk_queue (blkfront.c).
Btw: blockdev --getbsz /dev/space/test gives 4096 on Dom0, while DomU
reports 512.
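As a cross-check of what blkfront actually set, the same queue limits
can be read from sysfs on the DomU (a sketch; xvdb1 is the device from
the output above, and the path layout assumes a reasonably recent
Linux kernel):

```shell
# Read the block-queue limits for the whole device behind xvdb1.
dev=xvdb1
base=${dev%%[0-9]*}     # strip the partition number -> xvdb
echo "$base"
for f in minimum_io_size optimal_io_size; do
    p=/sys/block/$base/queue/$f
    if [ -r "$p" ]; then
        printf '%s: %s\n' "$f" "$(cat "$p")"
    fi
done
```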

I can mitigate the issue somewhat by using a much smaller chunk size,
but that is IMHO just working around the problem.

Is this a bug or a regression? Or does this happen to everyone using
RAID6 (and probably RAID5 as well), and no one has noticed the drop
until now? Is there any way to work around this issue?

Thanks,
        Adi Kriegisch

PS: I am using a stock Debian/Squeeze kernel on top of Debian's Xen 4.0.1-2.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
