xen-users

Re: [Xen-users] blk[front|back] does not hand over minimum and optimal_io_size to domU

To: Pasi Kärkkäinen <pasik@xxxxxx>
Subject: Re: [Xen-users] blk[front|back] does not hand over minimum and optimal_io_size to domU
From: Adi Kriegisch <adi@xxxxxxxxxxxxxxx>
Date: Thu, 24 Mar 2011 14:15:01 +0100
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20110323155033.GB32595@xxxxxxxxxxx>
References: <20110223132641.GM10906@xxxxxxxx> <20110323155033.GB32595@xxxxxxxxxxx>
User-agent: Mutt/1.5.13 (2006-08-11)

Dear Pasi,

I am still investigating this... (and I also wrote a bug report about it
which is still waiting for an update).

> > I investigated some serious performance drop between Dom0 and DomU with
> > LVM on top of RAID6 and blkback devices.
[SNIP]
> > minimum_io_size -- if not set -- is hardware block size which seems to be
> > set to 512 in xlvbd_init_blk_queue (blkfront.c). Btw: blockdev --getbsz
> > /dev/space/test gives 4096 on Dom0 while DomU reports 512.
I recompiled the kernel with those values hardcoded. It had no direct
impact on the benchmark results, so this assumption was wrong.
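
For reference, forcing those values in blkfront would look roughly like
the sketch below (against a 2.6.32-era drivers/block/xen-blkfront.c; the
4096/64k/512k numbers are only placeholders for a RAID6 layout, not
necessarily the right ones for a given array, and the rest of the
original queue setup is omitted):

/*
 * Relevant part of xlvbd_init_blk_queue() with forced topology hints;
 * everything else the original function configures is left out here.
 */
static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size)
{
	struct request_queue *rq;

	rq = blk_init_queue(do_blkif_request, &blkif_io_lock);
	if (rq == NULL)
		return -1;

	/* Logical block size as advertised by blkback. */
	blk_queue_logical_block_size(rq, sector_size);

	/* Forced topology hints instead of the 512-byte defaults. */
	blk_queue_physical_block_size(rq, 4096);   /* what dom0 reports  */
	blk_queue_io_min(rq, 64 * 1024);           /* RAID chunk size    */
	blk_queue_io_opt(rq, 512 * 1024);          /* chunk * data disks */

	gd->queue = rq;
	return 0;
}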

> > I can somehow mitigate the issue by using a way smaller chunk size but this
> > is IMHO just working around the issue.
Using a smaller chunk size indeed helps to improve write speeds, but read
speeds then get worse.
Benchmarking with different chunk sizes and different kernels is quite
time consuming; therefore I have not provided an update on that yet.
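
For those tests it is useful to double-check which topology values dom0
and the domU actually advertise for the volume. They can be read from
/sys/block/<device>/queue/minimum_io_size and optimal_io_size, or with a
small helper like this sketch (device names are examples only):

/*
 * topo.c -- print the I/O topology a block device advertises, so the
 * values seen in dom0 and in the domU can be compared directly.
 * Build:  gcc -o topo topo.c
 * Usage:  ./topo /dev/space/test   (dom0)   or   ./topo /dev/xvdb   (domU)
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>   /* BLKSSZGET, BLKBSZGET, BLKPBSZGET, BLKIOMIN, BLKIOOPT */

int main(int argc, char **argv)
{
	int fd, ssz = 0, bsz = 0;
	unsigned int psz = 0, iomin = 0, ioopt = 0;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <block device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	ioctl(fd, BLKSSZGET, &ssz);    /* logical sector size            */
	ioctl(fd, BLKBSZGET, &bsz);    /* what `blockdev --getbsz` shows */
	ioctl(fd, BLKPBSZGET, &psz);   /* physical block size            */
	ioctl(fd, BLKIOMIN, &iomin);   /* minimum_io_size                */
	ioctl(fd, BLKIOOPT, &ioopt);   /* optimal_io_size                */

	printf("%s: logical=%d soft-bsz=%d physical=%u io_min=%u io_opt=%u\n",
	       argv[1], ssz, bsz, psz, iomin, ioopt);
	close(fd);
	return 0;
}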

> > Is this a bug or a regression? Or does this happen to anyone using RAID6
> > (and probably RAID5 as well) and noone noticed the drop until now?
I'd be really glad if someone who is using RAID5 or RAID6 on Dom0 could
provide some numbers on this.
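
To keep such numbers comparable, a crude sequential O_DIRECT read test
along these lines would do (reading only, so non-destructive; the 1 MiB
request size and 1 GiB total are arbitrary example values). Run it once
against the LV in dom0 and once against the exported device in the domU:

/*
 * seqread.c -- crude sequential O_DIRECT read benchmark.
 * Build:  gcc -O2 -o seqread seqread.c -lrt
 * (-lrt is needed for clock_gettime() on older glibc)
 * Usage:  ./seqread /dev/space/test
 */
#define _GNU_SOURCE              /* for O_DIRECT */
#include <stdio.h>
#include <stdlib.h>
#include <fcntl.h>
#include <unistd.h>
#include <time.h>

#define REQ_SIZE  (1UL << 20)    /* 1 MiB per read()  */
#define TOTAL     (1UL << 30)    /* stop after 1 GiB  */

int main(int argc, char **argv)
{
	void *buf;
	int fd;
	unsigned long done = 0;
	struct timespec t0, t1;
	double secs;

	if (argc != 2) {
		fprintf(stderr, "usage: %s <block device>\n", argv[0]);
		return 1;
	}
	fd = open(argv[1], O_RDONLY | O_DIRECT);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* O_DIRECT requires an aligned buffer. */
	if (posix_memalign(&buf, 4096, REQ_SIZE) != 0) {
		fprintf(stderr, "posix_memalign failed\n");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &t0);
	while (done < TOTAL) {
		ssize_t n = read(fd, buf, REQ_SIZE);
		if (n <= 0) {
			if (n < 0)
				perror("read");
			break;   /* error or end of device */
		}
		done += n;
	}
	clock_gettime(CLOCK_MONOTONIC, &t1);

	secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
	printf("%lu MiB in %.2f s => %.1f MiB/s\n",
	       done >> 20, secs, (done >> 20) / secs);

	free(buf);
	close(fd);
	return 0;
}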

This may also be related to the weak hardware I am using: the machine is
an Atom D525, i.e. two cores with hyperthreading (four logical CPUs).
Maybe the issue is related to the Atom's in-order execution or something
like that?

> Did you find more info about this issue?
To sum it up: no, not yet! ;-)

Thanks for asking,
        Adi 

