xen-users

Re: [Xen-users] Debian Etch Xen Cluster (DRBD, GNBD, OCFS2, iSCSI, Heartbeat?)

To: Dominik Klein <dk@xxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] Debian Etch Xen Cluster (DRBD, GNBD, OCFS2, iSCSI, Heartbeat?)
From: Christian Horn <chorn@xxxxxxxxxxxx>
Date: Thu, 13 Sep 2007 20:06:49 +0200
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 13 Sep 2007 11:07:18 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <46E93E7C.5080207@xxxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <46E6A47B.3010309@xxxxxxxx> <46E92EB0.3040209@xxxxxxxxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01249662@trantor> <46E93E7C.5080207@xxxxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)
On Thu, Sep 13, 2007 at 03:43:24PM +0200, Dominik Klein wrote:
> >Just remember, if something goes wrong in such a way that the domain is
> >active on both nodes at the same time with read/write access to the
> >filesystem, you *will* *destroy* the filesystem and will need to restore
> >from backup. No amount of fscking will help you.
> >
> >(I speak from bitter experience :)
> 
> Full Ack (without the bitter experience yet though).
> 
> That's why you want to have a functioning STONITH device and redundant 
> cluster communication paths.

That will only help on the cluster side.
Does anyone have good ideas on how to prevent domUs that are stored on
EVMS or CLVM volumes from being started on multiple dom0s?
My ideas would be setting a lock on an LDAP server, putting lock files
on an OCFS2 filesystem, or going through the dom0s via SSH and asking
whether the domU is already running. Is there something less error-prone?
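
The lock-file idea could be as simple as this rough, untested sketch
(the /cluster/locks path and the xm create line are placeholders, not
a real setup):

#!/usr/bin/env python
# Rough sketch: take an exclusive lock file on a shared OCFS2 mount
# before starting a domU, so two dom0s cannot start it concurrently.
import os
import socket
import subprocess
import sys

LOCKDIR = "/cluster/locks"  # hypothetical OCFS2 mount shared by all dom0s

def start_domu(name):
    lockfile = os.path.join(LOCKDIR, name + ".lock")
    try:
        # O_CREAT|O_EXCL is atomic, also across nodes on OCFS2, so at
        # most one dom0 can create the file and win the race.
        fd = os.open(lockfile, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
    except OSError:
        owner = open(lockfile).read().strip()
        sys.exit("refusing to start %s: lock held by %s" % (name, owner))
    f = os.fdopen(fd, "w")
    f.write(socket.gethostname() + "\n")
    f.close()
    if subprocess.call(["xm", "create", "/etc/xen/%s.cfg" % name]) != 0:
        os.unlink(lockfile)  # creation failed, release the lock again
        sys.exit("xm create failed for %s" % name)

if __name__ == "__main__":
    start_domu(sys.argv[1])

The obvious weakness is a dom0 that crashes while holding a lock: the
stale lock file has to be cleaned up by hand (or by the cluster manager
after a successful STONITH), which brings back exactly the
error-proneness the question is about.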


Christian

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
