xen-users

Re: [Xen-users] small cluster storage configuration?

To: "xen-users@xxxxxxxxxxxxxxxxxxx" <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] small cluster storage configuration?
From: Miles Fidelman <mfidelman@xxxxxxxxxxxxxxxx>
Date: Mon, 10 Oct 2011 09:46:48 -0400
Delivery-date: Mon, 10 Oct 2011 06:47:45 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4E92F0A2.3070206@xxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <4E921F8A.5070406@xxxxxxxxxxxxxxxx> <4E92F0A2.3070206@xxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:7.0.1) Gecko/20110928 Firefox/7.0.1 SeaMonkey/2.4.1
John Madden wrote:
>> I now have 2 new servers - each with a lot more memory, faster CPUs (and
>> more cores), also 4 drives each.  So I'm wondering what's my best option
>> for wiring the 4 machines together as a platform to run VMs on.

> A few things come to mind:
>
> - Since these two new boxes are so much better, do you really need all four systems for hosting VMs? If you can do everything on the two old systems, you can do it all on the new systems. If you just keep it to two nodes, you eliminate some complexity.

I'm mixing development with production, and working on some server-side software intended to work across large numbers of nodes - so I figure 4 machines give us some more flexibility. I'm also thinking of keeping some of our current production stuff (mostly mailing lists) on the old systems - but setting things up to make it easier to migrate later. (But, yes, I have thought of it :-)

> - Ever worked with GlusterFS? It'll allow you to stripe and replicate across multiple nodes.

That's pretty much the only thing that jumps out as I peruse the net looking for relatively mature solutions. The one other thing that comes close is Sheepdog - but it's KVM-only and not all that mature.

One thing I've been wondering about - I can't find it in the documentation, and I guess I might just have to start experimenting - is how GlusterFS treats disks on the same node vs. disks on different nodes. Things like:

- whether to run RAID on each node as well as configure GlusterFS to stripe/replicate across nodes (i.e., with 16 drives total, split 4 per node, will Gluster place replicas/stripes so that a node failure won't kill you?) - a rough sketch of what I'm picturing is below

- what happens to performance if you stripe/replicate across nodes?
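
For instance, what I'd be inclined to try - and this is an untested sketch, with hostnames, devices, and paths made up - is RAID1 within each node, plus a "replica 2" Gluster volume with the bricks listed so that each replica pair spans two different nodes:

   # on each node: mirror two local drives and mount the result as a brick
   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
   mkfs.ext4 /dev/md0
   mkdir -p /export/brick1 && mount /dev/md0 /export/brick1

   # from any one node (after "gluster peer probe" of the others);
   # with "replica 2", bricks are paired in the order listed, so alternating
   # hosts keeps each replica pair on two different machines
   gluster volume create vmstore replica 2 transport tcp \
       node1:/export/brick1 node2:/export/brick1 \
       node3:/export/brick1 node4:/export/brick1
   gluster volume start vmstore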

Have you (or anybody) had much experience with GlusterFS in practice? Particularly on a relatively small cluster? Comments? Suggestions?

> Thought of these....
> - If I/O isn't too much of an issue, you could use one pair with DRBD as the storage nodes and export to the other two over NFS/etc.

Thought about this, but it would leave half my disk space idle.
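
(Just so I'm sure I follow, you mean roughly this, right? Untested sketch - it assumes a resource "r0" is already defined in drbd.conf, and the export network is made up:

   # on both storage nodes:
   drbdadm create-md r0
   drbdadm up r0
   # on the node chosen as primary only:
   drbdadm -- --overwrite-data-of-peer primary r0
   mkfs.ext4 /dev/drbd0
   mount /dev/drbd0 /srv/vmstore
   echo '/srv/vmstore 10.0.0.0/24(rw,no_root_squash,sync)' >> /etc/exports
   exportfs -ra

The other two nodes would then just NFS-mount /srv/vmstore for their VM images.)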

> - Export everything over iSCSI, use md to mirror, name everything carefully so you know which nodes you can take down based on where VMs are. Complicated, but workable. I suppose you could set this up with DRBD too.

Been thinking about this one too, but the complexity really scares me. Any thoughts re: tools that might simplify things, and/or the performance implications of using md to mirror across iSCSI exports from different machines?
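
Roughly what I have in mind (untested, and all the names - target IQN, volume names, device paths - are made up): each node exports a logical volume over iSCSI, and the node actually running the VM mirrors a local LV against the remote one with md, e.g.:

   # on node2: export an LV as an iSCSI target (tgt syntax)
   tgtadm --lld iscsi --op new --mode target --tid 1 \
       -T iqn.2011-10.local.node2:vm1
   tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
       -b /dev/vg0/vm1
   tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL

   # on node1: log in to the target, then mirror a local LV against it;
   # --write-mostly on the iSCSI leg keeps reads on the local disk
   iscsiadm -m discovery -t sendtargets -p node2
   iscsiadm -m node -T iqn.2011-10.local.node2:vm1 -p node2 --login
   mdadm --create /dev/md10 --level=1 --raid-devices=2 \
       /dev/vg0/vm1 --write-mostly /dev/sdX    # sdX = the new iSCSI disk

That's already a fair number of moving parts per VM, which is what worries me.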

Thanks again to all,

Miles Fidelman




--
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra



_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users