WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

[Xen-devel] blk[front|back] does not hand over disk parameters

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] blk[front|back] does not hand over disk parameters
From: Adi Kriegisch <adi@xxxxxxxxxxxxxxx>
Date: Fri, 25 Feb 2011 17:43:52 +0100
Delivery-date: Sat, 26 Feb 2011 13:31:14 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)
Dear all,

(following the XenFAQ on how to report a bug[1], I submitted this to the
xen-users list[2] first, reported the bug in Bugzilla[3], and am now
resending the text to this list. Please CC me in replies as I am not
subscribed to this list. Thanks!)

I have been investigating a serious performance drop between Dom0 and DomU
with LVM on top of RAID6 exported through blkback devices.
While I get around 130MB/s write performance in Dom0, I only get 30MB/s in
DomU. Inspecting this with dstat/iostat revealed a read rate of about
17-25MB/s while writing at around 40MB/s.
The reads only occur on the disk devices assembled into the RAID6, not on
the md device itself, so they are caused by RAID6 activity alone.
The reason is parity recalculation (read-modify-write) triggered by a
too-small optimal_io_size:
On Dom0:
  # blockdev --getiomin /dev/space/test
  524288    (the chunk size)
  # blockdev --getioopt /dev/space/test
  3145728   (6 * chunk size)

On DomU:
  # blockdev --getiomin /dev/xvdb1
  512
  # blockdev --getioopt /dev/xvdb1
  0         (so the kernel falls back to its 1MB default, IIRC)
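The numbers above can be cross-checked with a little arithmetic; this is a
sketch that assumes the array geometry implied by the Dom0 values (an
8-device RAID6, i.e. 6 data disks plus 2 parity, with 512 KiB chunks):

```shell
# Assumed geometry: 8-device RAID6 = 6 data disks + 2 parity, 512 KiB chunks.
chunk=524288                      # io_min reported by Dom0 (bytes)
data_disks=6
stripe=$((chunk * data_disks))    # full RAID6 stripe = optimal_io_size
echo "full stripe: $stripe"       # matches --getioopt on Dom0

# A DomU that assumes a 1 MiB optimal size covers only a third of a
# stripe per write, so md must read the untouched remainder of the
# stripe to recompute the P/Q parity (read-modify-write):
write=$((1024 * 1024))
reread=$((stripe - write))
echo "bytes re-read per 1 MiB partial-stripe write: $reread"
```

That re-read traffic on the member disks is consistent with the 17-25MB/s
of reads showing up in dstat/iostat during a pure write workload.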

minimum_io_size -- if not set -- defaults to the hardware block size, which
is set to 512 in xlvbd_init_blk_queue() (blkfront.c). Btw: blockdev --getbsz
/dev/space/test reports 4096 in Dom0 while DomU reports 512.
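For reference, the same topology values can also be read straight from
sysfs; here is a small helper (the device name and the sysfs root are
parameters purely for illustration -- normally the root is just /sys):

```shell
# Print the I/O topology the kernel exposes for a block device.
# $1 = device name (e.g. xvdb), $2 = sysfs root (defaults to /sys).
queue_topology() {
    local dev=$1 root=${2:-/sys}
    for f in minimum_io_size optimal_io_size hw_sector_size; do
        printf '%s: %s\n' "$f" "$(cat "$root/block/$dev/queue/$f")"
    done
}

# Example: queue_topology xvdb
```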

I can mitigate the issue somewhat by using a much smaller chunk size, but
that is IMHO just working around the problem. Another workaround would be to
use a power-of-two number of data disks in the RAID and choose the chunk
size so that a full stripe adds up to 1MB. But this is just another hack...
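To illustrate that second workaround with hypothetical numbers: on a
6-device RAID6 there are 4 data disks, so the chunk size that makes a full
stripe match the kernel's 1MB fallback would be 256 KiB:

```shell
# Hypothetical geometry for the power-of-two workaround:
# 6-device RAID6 -> 4 data disks; pick the chunk so the stripe is 1 MiB.
target=$((1024 * 1024))        # kernel default when optimal_io_size is 0
data_disks=4
chunk=$((target / data_disks))
echo "chunk size: $chunk"      # 262144 bytes = 256 KiB

# Untested sketch of the corresponding array creation (mdadm takes the
# chunk size in KiB):
#   mdadm --create /dev/md0 --level=6 --raid-devices=6 --chunk=256 ...
```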

If there is anything I can do, please let me know!

Thanks,
        Adi Kriegisch

PS: I am using a stock Debian/Squeeze kernel on top of Debian's Xen 4.0.1-2.

[1] http://wiki.xensource.com/xenwiki/XenFaq
[2] http://lists.xensource.com/archives/html/xen-users/2011-02/msg00615.html
[3] http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=1745

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel