xen-devel

Re: Re: [Xen-devel] Xen pv_ops dom0 2.6.32.13 issues

To: jeremy@xxxxxxxx
Subject: Re: Re: [Xen-devel] Xen pv_ops dom0 2.6.32.13 issues
From: greno@xxxxxxxxxxx
Date: Wed, 09 Jun 2010 18:52:40 -0500 (CDT)
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 09 Jun 2010 16:53:37 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thanks, I'll check into using tap:aio.  I had tried it once before and could not get it to work.

Here is my entry from pv-grub:
# pv-grub: tap:aio: will not work for disk, use file:
disk = [ "file:/var/lib/xen/images/CLOUD-CC-1.img,xvda,w" ]

I had difficulty getting tap:aio to work in the disk entry.  I can't remember whether that problem was specific to pv-grub or affected dom0 in general; this was about 6 months ago.  I guess that is no longer a problem now?
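
For reference, the tap:aio form of that entry would presumably be the
following (just a sketch using the same image path; I haven't re-tested
it under pv-grub):

disk = [ "tap:aio:/var/lib/xen/images/CLOUD-CC-1.img,xvda,w" ]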


Jun 9, 2010 07:37:46 PM, jeremy@xxxxxxxx wrote:
On 06/09/2010 04:27 PM, greno@xxxxxxxxxxx wrote:
> blkback

Using file: in your config file? That really isn't recommended because
it has poor integrity: the writes are buffered in dom0, so they can be
reordered or lost on a crash, and the guest filesystem can't maintain
any of its own integrity guarantees.

tap:aio: is more resilient, since the writes go directly to the device
without buffering.

That doesn't directly relate to your lockup issues, but it should
prevent filesystem corruption when they happen.
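
To make that concrete, the common disk prefixes look something like
this (a sketch from memory; the image and device paths below are
placeholders, so adjust them for your setup):

# file: attaches the image through a dom0 loop device, so writes go
# through the dom0 page cache and can be reordered or lost on a crash.
disk = [ "file:/var/lib/xen/images/guest.img,xvda,w" ]

# tap:aio: uses blktap with O_DIRECT async I/O, bypassing the dom0
# page cache on the way to the image.
disk = [ "tap:aio:/var/lib/xen/images/guest.img,xvda,w" ]

# phy: hands a real block device (e.g. an LVM volume) to blkback.
disk = [ "phy:/dev/vg0/guest-disk,xvda,w" ]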

J



>
>
>
> Jun 9, 2010 07:13:23 PM, jeremy@xxxxxxxx wrote:
>
> On 06/09/2010 04:05 PM, greno@xxxxxxxxxxx wrote:
> > Jeremy,
> > The soft lockups seemed to be occurring in different systems, and I
> > could never make sense of what was triggering them. I have not
> > mounted any filesystems with "nobarrier" in the guests. The guests
> > each use a single /dev/xvda. The underlying physical storage is LVM
> > over RAID-1 arrays. I'm attaching dmesg, kern.log, and messages in
> > case they might be useful.
>
> Using what storage backend? blkback? blktap2?
>
> J
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel