WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
Re: AW: [Xen-users] Deploying redhat clusters under Xen 3.0.2

To: Javier Guerra <javier@xxxxxxxxxxx>
Subject: Re: AW: [Xen-users] Deploying redhat clusters under Xen 3.0.2
From: "Christopher G. Stach II" <cgs@xxxxxxxxx>
Date: Mon, 21 Aug 2006 22:28:09 -0500
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 21 Aug 2006 20:28:53 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <200608210813.17818.javier@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <00c101c6c502$ffac0d70$3e01a8c0@athlon> <44E980E3.6060306@xxxxxxxxx> <44E9ACD8.50603@xxxxxxxxx> <200608210813.17818.javier@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.5 (X11/20060807)
Javier Guerra wrote:
> On Monday 21 August 2006 7:53 am, Christopher G. Stach II wrote:
>> would do differently.  First, don't let dom0 manage any of the storage.
>>  Use one of GNBD, iSCSI, AoE, etc. and dedicated private network cards
>> for each domain.  I wouldn't even bother bridging vifs.  Second, use the
>
> I see why you wouldn't use vif bridging for storage, but why not manage the storage in dom0? Having the GNBD, iSCSI, or AoE device(s) and CLVM there should work, and it seems like it would be the most manageable setup.
>
> Would you share your (bad) experiences with dom0-managed storage?
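For reference, the dom0-managed layout Javier describes would look roughly like this. This is only a sketch, assuming iSCSI via open-iscsi and CLVM; the target name, portal address, and volume/device names are all illustrative:

```shell
# In dom0: discover and log in to the shared iSCSI target
# (target IQN and portal are hypothetical)
iscsiadm -m discovery -t sendtargets -p 192.168.10.5
iscsiadm -m node -T iqn.2006-08.example:cluster-store -p 192.168.10.5 -l

# Activate the clustered volume group sitting on that device
# (requires clvmd running in the cluster)
vgchange -a y clustervg

# Then export a clustered LV to each domU from its config file, e.g.:
#   disk = ['phy:/dev/clustervg/node1-root,xvda,w']
```

In this layout every guest's block I/O funnels through dom0, which is exactly the bottleneck being debated below.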

Oh, it works, but not very well under high I/O load. You want to dedicate a physical processor to dom0 for that, but even then you end up with pCPU contention in the domUs (if you're running more vCPUs than pCPUs, minus the one reserved for dom0). Eventually the domUs lag so much that CMAN drops them and they get fenced. I'm sure the credit scheduler will help, but I'm doubtful it will help that much. It's pretty clear to me that in such cases you really should just let dom0 manage the domUs and let the physical hardware do the work (network I/O straight to the block device and to the other cluster nodes). I had been running this configuration on 3.0.2 since it came out. It wasn't very happy.
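The alternative described above can be sketched as follows, assuming Xen 3.0.x `xm` tooling and PCI passthrough via pciback; the PCI address and config values are illustrative, not taken from the thread:

```shell
# Pin dom0's vcpu to a dedicated physical CPU so its control work
# doesn't contend with guest vcpus (domain 0, vcpu 0 -> pcpu 0)
xm vcpu-pin 0 0 0

# In each domU's config, hand a dedicated NIC to the guest with PCI
# passthrough instead of bridging a vif (device must be bound to
# pciback in dom0 first):
#   pci = ['02:00.0']
#
# The guest then runs its own GNBD/iSCSI/AoE initiator over that NIC,
# so storage I/O never passes through dom0.
```

With this split, a stalled dom0 can no longer starve the guests' block I/O, which is what was causing CMAN to fence nodes.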

Why would you want a choke between your block devices, anyway?

--
Christopher G. Stach II

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users