[Xen-users] Raid 5+0 Problems Under Xen

To: Xen-Users List <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] Raid 5+0 Problems Under Xen
From: chris <tknchris@xxxxxxxxx>
Date: Sun, 21 Feb 2010 05:48:47 -0500
Delivery-date: Sun, 21 Feb 2010 02:49:55 -0800
I am experiencing a weird issue with a RAID 5+0 under dom0. I am
running Xen 3.2 from Debian Lenny, which ships the 2.6.26-2-xen-amd64
dom0 kernel. There are six 1TB SATA disks arranged as two 3-disk
RAID5 arrays, which are then striped together with RAID0 (roughly as
sketched below). The chunk size on all arrays is 64k. I was able to
create and sync all of the arrays without issue, then initialized LVM
on top of the RAID0 and created two LVs, again without issue. I was
able to install two guests with no apparent problems; however, after
two days I noticed errors in the guests indicating that their disks
had bad blocks. I checked dom0 and found lots of messages like this
one:

[305012.467758] raid0_make_request bug: can't convert block across
chunks or bigger than 64k 2385277 4
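
For reference, the arrays and volumes were built roughly like this (a
sketch from memory; the device names, partition layout and LV sizes here
are placeholders, but the 64k chunk size is what all the arrays use):

# two 3-disk RAID5 arrays, striped together with RAID0, 64k chunks throughout
mdadm --create /dev/md0 --level=5 --chunk=64 --raid-devices=3 /dev/sd[abc]1
mdadm --create /dev/md1 --level=5 --chunk=64 --raid-devices=3 /dev/sd[def]1
mdadm --create /dev/md2 --level=0 --chunk=64 --raid-devices=2 /dev/md0 /dev/md1

# LVM on top of the RAID0, two LVs for the two guests
pvcreate /dev/md2
vgcreate vg0 /dev/md2
lvcreate -L 200G -n guest1-disk vg0
lvcreate -L 200G -n guest2-disk vg0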

I posted this to the linux-raid mailing list, where it was suggested
that this bug is most likely caused by the Xenified kernel.

A quote from the linux-raid mailinglist:

> This looks like a bug in 'dm' or more likely xen.
> Assuming you are using a recent kernel (you didn't say), raid0 is
> receiving a request that does not fit entirely in one chunk, and
> which has more than one page in the bi_iovec.
> i.e. bi_vcnt != 1 or bi_idx != 0.
>
> As raid0 has a merge_bvec_fn, dm should not be sending bios with more than 1
> page without first checking that the merge_bvec_fn accepts the extra page.
> But the raid0 merge_bvec_fn will reject any bio which does not fit in
> a chunk.
>
> dm-linear appears to honour the merge_bvec_fn of the underlying device
> in the implementation of its own merge_bvec_fn.  So presumably the xen client
> is not making the appropriate merge_bvec_fn call.
> I am not very familiar with xen:  how exactly are you making the logical
> volume available to xen?
> Also, what kernel are you running?
>
> NeilBrown
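
For what it's worth, the failing request in the error above really does
straddle a chunk boundary. Assuming the first number in the message is
the bio's start sector (512-byte sectors, so a 64k chunk is 128 sectors)
and the trailing 4 is the bio size in KiB, a quick check:

# sector 2385277 sits 125 sectors into its 128-sector chunk, so anything
# longer than 3 sectors (1.5k) starting there spills into the next chunk
$ echo $(( 2385277 % 128 ))
125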

Unfortunately, since I am running 3.2, from what I understand the dom0
kernel options are limited, so I am not sure whether there is any advice
to be had on this mailing list or whether I should bring this up on
xen-devel instead. I have posted detailed RAID information and the full
errors at http://pastebin.com/f6a52db74
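
In case it is relevant to the question above about how the volumes reach
the guests: the usual way on Xen 3.2 is a phy: disk line in the domU
config pointing at the LV, along these lines (illustrative only; the
names are placeholders and may not match my actual configs):

$ grep ^disk /etc/xen/guest1.cfg
disk = [ 'phy:/dev/vg0/guest1-disk,xvda,w' ]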

I would appreciate any advice or input on this issue.

- chris

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
