WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-users

[Xen-users] RE: raid10 + lvm + domU NoGo?

On Monday, 08.09.2008, at 10:09 -0400, Ross S. W. Walker wrote:
> henry ritzlmayr wrote:
> > 
> > Hi list, 
> > 
> > I have a fully updated CentOS 5.2 (Dom0+DomU). I wanted to do some
> > performance tests with Databases within a DomU (compare raid-levels).
> > From a theoretical point of view raid10 should be best, but I can't
> > even start the DomU when the LV sits on a raid10.
> > 
> > The Error I get within Dom0 is (several of those):
> > 
> > raid10_make_request bug: can't convert block across chunks or bigger
> > than 64k 1024000507 3
> > 
> > The DomU just sees the disk as defective.
> > 
> > Changing from phy to tap:aio gets the DomU running, but performance-wise
> > this is not the solution I am seeking.
> > 
> > My google voodoo brought up:
> > 
> > http://www.issociate.de/board/post/485110/Bug(?)_in_raid10_(raid10,f2_-_lvm2_-_xen).html
> > 
> > http://www.issociate.de/board/post/423708/raid10_make_request_bug:_can't_convert_block_across_chunks_or_bigger_%5B...%5D.html
> > 
> > https://bugzilla.redhat.com/show_bug.cgi?id=224077
> > 
> > https://bugzilla.redhat.com/show_bug.cgi?id=223947
> > 
> > So for me it looks like this is still a NoGo - right?
> 
> The kernel md raid10 driver is a little off.
> 
> You could try striping LVs across md RAID1 PVs.
> 
> Say you have 6 drives: create 3 MD RAID1s, convert them to PVs
> (using the whole disk is fine, no need to partition), add them to a
> volume group, then create the LVs with lvcreate -i 3 ..., which will
> cause them to stripe across all 3 PVs.
> 
> This will provide identical performance to the MD RAID10 and
> should work fine with Xen.
> 
> -Ross
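For reference, the layout Ross describes above could look roughly like this. This is a sketch, not a tested recipe: the device names (/dev/sd[a-f]), the volume group name, and the LV size/stripe size are all illustrative assumptions, and the commands must be run as root on disks you are willing to wipe.

```shell
# Three two-disk MD RAID1 mirrors (device names are assumptions)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde /dev/sdf

# Turn each whole mirror into an LVM physical volume (no partitioning needed)
pvcreate /dev/md0 /dev/md1 /dev/md2

# One volume group spanning all three PVs (name "vg_guests" is made up)
vgcreate vg_guests /dev/md0 /dev/md1 /dev/md2

# Stripe each LV across all three PVs (-i 3); -I sets the stripe size in KB
lvcreate -i 3 -I 64 -L 20G -n domU1 vg_guests
```

The resulting LV can then be exported to the DomU with a phy: line as usual, since plain RAID1 does not trigger the raid10_make_request splitting bug.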

Hi Ross, 

thanks for the reply, I will give this a shot. Do you have any
information (a link) on why the md raid10 driver is "a little off", or
whether there are plans to change this? I guess this is probably an
upstream issue, but some of my test systems only have three disks, so
the above solution works only for a subset of my machines.

Henry 



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
