WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

Subject: Re: [Xen-users] small cluster storage configuration?
From: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
Date: Mon, 10 Oct 2011 16:35:41 -0400
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 10 Oct 2011 13:36:37 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4E9352DF.708@xxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4E921F8A.5070406@xxxxxxxxxxxxxxxx> <4E929DC4.6020304@xxxxxxxxxx> <AEC6C66638C05B468B556EA548C1A77D01E5E49C@trantor> <4E92F0AF.8070006@xxxxxxxxxxxxxxxx> <4E9352DF.708@xxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:7.0.1) Gecko/20110928 Firefox/7.0.1 SeaMonkey/2.4.1
Bart Coninckx wrote:
On 10/10/11 15:18, Miles Fidelman wrote:
James Harper wrote:
Bart Coninckx wrote:

DRBD does not do 4 nodes. If you split into two clusters you cannot
cross-migrate, unless you set up the storage on each node in some
way, but how are you going to replicate?


Well, yes, ... that's sort of why I'm asking the question :-)

The question implies that I don't see a possibility for this.

That's why I'm trying to avoid using DRBD - I'm looking for an alternative that will replicate data across all four nodes and allow continued operation if one (or possibly two) nodes fail. It looks like GlusterFS, VastSky, and Sheepdog would do this, but development on VastSky seems to have stalled, and Sheepdog is KVM-only.
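For reference, a 4-way replicated GlusterFS volume along those lines might be set up roughly as follows - a sketch, not a tested recipe; the hostnames (node1..node4), brick path, and mount point are all placeholders:

```shell
# On one node, peer the other three hosts into the trusted pool:
gluster peer probe node2
gluster peer probe node3
gluster peer probe node4

# "replica 4" with four bricks keeps a full copy of the data on every
# node, so the volume keeps working if one or two nodes fail:
gluster volume create vmstore replica 4 \
    node1:/export/brick node2:/export/brick \
    node3:/export/brick node4:/export/brick
gluster volume start vmstore

# Mount it on each node for use as VM storage:
mount -t glusterfs localhost:/vmstore /var/lib/xen/images
```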

An alternative might be to export everything via iSCSI or AoE, run md RAID10 across the mess
of drives, or some such.
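As a sketch of that AoE-plus-md idea (the disk device, interface, and shelf/slot numbers here are hypothetical):

```shell
# On each of the four nodes, export a local disk over AoE with vblade
# (shelf 0; give each node its own slot, 1 through 4):
vbladed 0 1 eth0 /dev/sdb      # node 1; use slot 2, 3, 4 on the others

# On whichever node assembles the array, discover the exported devices
# (they appear as /dev/etherd/e<shelf>.<slot>) and build md RAID10:
aoe-discover
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/etherd/e0.1 /dev/etherd/e0.2 /dev/etherd/e0.3 /dev/etherd/e0.4
```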
Which leads to two follow-up questions:

1. Can I assume that you're both suggesting a 4-node storage cluster,
and a 4-node VM cluster - running on the same 4 computers? If so, that's
sort of what I'm aiming for.

No, we're suggesting two 2-node clusters, one for storage, one for virtualization.
Ok... that's what I'm trying to avoid - mostly because that would make half my drives unavailable.
2. What software are you running for your storage cluster?

I'm running IET. Next project I would try AoE though.
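For context, an IET target is configured in ietd.conf (typically /etc/ietd.conf or /etc/iet/ietd.conf, depending on the distribution) roughly like this - the IQN and backing device are hypothetical:

```
Target iqn.2011-10.com.example:storage.vm1
    # blockio passes I/O straight through to the backing block device
    Lun 0 Path=/dev/vg0/vm1,Type=blockio
```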

Running anything on top of that in the way of a cluster file system?

Thanks,

Miles

--
In theory, there is no difference between theory and practice.
In<fnord>  practice, there is.   .... Yogi Berra



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users