WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-users

Re: [Xen-users] Re: Alternatives to cman+clvmd ?

To: Ferenc Wagner <wferi@xxxxxxx>
Subject: Re: [Xen-users] Re: Alternatives to cman+clvmd ?
From: Christopher Smith <csmith@xxxxxxxxxxxxxxxx>
Date: Thu, 19 Mar 2009 18:52:22 +0100
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 19 Mar 2009 10:53:09 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <871vt274n2.fsf@xxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <49B8EE9A.9060808@xxxxxxxxxxxxxxxx> <871vt274n2.fsf@xxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.19 (Windows/20081209)
Ferenc Wagner wrote:
> Christopher Smith <csmith@xxxxxxxxxxxxxxxx> writes:

>> I am "managing" the shared storage [...] using cman+clvmd [...]
>> However, this combination seems to be horribly unreliable.  Any
>> network hiccup beyond a lost ping or two results in cman losing
>> contact with the rest of the machines, which it frequently does not
>> regain.

> This isn't inherent to cman; I've been using it for years without much
> trouble.  You have to handle fencing correctly, otherwise it will bite
> you, no matter what timeouts you configure.  But otherwise, it's OK.

Well, I use manual fencing...

I realise this is less than ideal, but to me it is far preferable to a Xen host getting fenced and taking out twenty VMs, especially given how frequently the cluster seems to reach the point where it would have fenced something.
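
(For reference, a minimal sketch of what automated fencing in cluster.conf can look like on a cman cluster. The node names, IPMI addresses and credentials below are invented for illustration, and fence_ipmilan is just one possible agent; adapt it to whatever out-of-band power control the hosts actually have.)

  <?xml version="1.0"?>
  <cluster name="xencluster" config_version="1">
    <clusternodes>
      <clusternode name="xen1" nodeid="1">
        <fence>
          <method name="1">
            <device name="ipmi-xen1"/>
          </method>
        </fence>
      </clusternode>
      <clusternode name="xen2" nodeid="2">
        <fence>
          <method name="1">
            <device name="ipmi-xen2"/>
          </method>
        </fence>
      </clusternode>
    </clusternodes>
    <fencedevices>
      <!-- sketch only: fence_ipmilan power-cycles a node via its IPMI/BMC;
           the addresses and credentials here are placeholders -->
      <fencedevice name="ipmi-xen1" agent="fence_ipmilan" ipaddr="10.0.0.11" login="admin" passwd="secret"/>
      <fencedevice name="ipmi-xen2" agent="fence_ipmilan" ipaddr="10.0.0.12" login="admin" passwd="secret"/>
    </fencedevices>
  </cluster>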

>> For example, failing one of the bonded NICs usually takes a few
>> seconds for everything to 'stabilise' again on the network, but in
>> that time cman has lost contact with all the other nodes and often
>> killed itself (or bits of itself) in the process.

> This shouldn't happen.  Either you misconfigured your bonding (though
> a few seconds' failover time doesn't sound gross), or more likely you
> misconfigured cman: even its defaults aren't that strict.

This was actually a problem on the switches, something to do with STP: the bonding failovers were taking a bit over 30 seconds, which is probably why cman was falling over.
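
(Side note: if the network really does need 30+ seconds to reconverge, one sketch of a workaround on a cman/openais cluster is to raise the totem token timeout in cluster.conf, as below; the 60-second figure is an arbitrary example, not a recommendation. The cleaner fix is on the switch side, e.g. edge/portfast mode or RSTP on host-facing ports where the switches support it, so spanning-tree convergence stops taking 30+ seconds in the first place.)

  <!-- sketch only: goes inside the <cluster> element of cluster.conf.
       Allow roughly 60s without totem traffic before a node is declared
       dead; the default is considerably lower. -->
  <totem token="60000"/>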

> Having said all this, managing the cluster infrastructure for so
> little (clvm only) feels excessive indeed.  But I don't know any
> better way (other than doing volume management on the storage side).

And volume management on the storage side brings along its own set of complications. :(
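
(For anyone weighing the alternatives: the reason the full cluster stack is needed for "clvm only" is that clvmd depends on cluster-wide LVM locking, which is enabled in lvm.conf roughly as sketched below. The settings shown are the usual ones for clvmd, not values taken from this thread.)

  # /etc/lvm/lvm.conf (sketch) - cluster-wide LVM locking via clvmd
  global {
      locking_type = 3               # built-in clustered locking (clvmd)
      fallback_to_local_locking = 0  # don't silently fall back to local locking if clvmd is unavailable
  }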

--
Christopher Smith

UNIX Team Leader
Nighthawk Radiology Services
Limmatquai 4, 6th Floor
8001 Zurich, Switzerland
http://www.nighthawkrad.net
Sydney Fax:    +61 2 8211 2333
Zurich Fax:    +41 43 497 3301
USA Toll free:  866 241 6635

Email:         csmith@xxxxxxxxxxxxxxxx
IP Extension:  8163
Sydney Phone:  +61 2 8211 2363
Sydney Mobile: +61 4 0739 7563
Zurich Phone:  +41 44 267 3363
Zurich Mobile: +41 79 550 2715

All phones are forwarded to my current location; however, please consider the local time in Zurich before calling from abroad.


CONFIDENTIALITY NOTICE: This email, including any attachments, contains information from NightHawk Radiology Services, which may be confidential or privileged. The information is intended to be for the use of the individual or entity named above. If you are not the intended recipient, be aware that any disclosure, copying, distribution or use of the contents of this information is prohibited. If you have received this email in error, please notify NightHawk Radiology Services immediately by forwarding message to postmaster@xxxxxxxxxxxxxxxx and destroy all electronic and hard copies of the communication, including attachments.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
