Re: [Xen-users] Distributed xen or cluster?

To: xen-users <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-users] Distributed xen or cluster?
From: "lists@xxxxxxxxxxxx" <lists@xxxxxxxxxxxx>
Date: Tue, 20 Jan 2009 23:51:54 -0600
Delivery-date: Tue, 20 Jan 2009 21:52:39 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <49769785.7000403@xxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Reply-to: lists@xxxxxxxxxxxx
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
> Ah, so you're familiar with GFS and LVS. From your earlier post I'm not
> sure whether you're a newbie or someone experienced :)

I'm always a newbie :). What I mean is that I take on various new 
technologies but never have the time to become very proficient with any one 
of them. I try to learn them well enough to put them to use, then over time 
I try to learn more so that I can fine-tune things.

I guess my newness to Xen is showing in this case as well. Of course, anyone 
who is new to anything will eventually need to pick up the proper 
terminology; I think that's probably the biggest giveaway.

Anyhow, yes, I'd been using GFS for about 3 years now, I think. I slowly 
started moving toward filer-based NFS because the fencing issues were 
becoming rather frustrating.
 
> on a shared storage or db, you get a "redundant" setup, as in you can
> connect to any server and will get the same session. There's still a
> possibility of failure though: the client won't retry the request if the
> data transfer is interrupted in the middle.

Right, this is somewhat simple because we're not talking about an entire 
operating system needing to be redundant.

> - NFS can handle server failover better than HTTP. NFS over TCP will
> automatically reconnect if disconnected, and retry a failed request.
> This setup still has a possible problem : If an NFS TCP client is moved
> from one server to another it will work, but when moved back again to
> the first server in a short time (say several minutes) it will not work.
> To handle this particular issue you can use NFS over UDP.
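
That's a handy detail. For anyone else reading along, forcing UDP should 
just be a mount option on the client side (I haven't tested this on my own 
setup, but per nfs(5)):

  mount -t nfs -o udp,hard,intr filer:/export /mnt/export

or the equivalent /etc/fstab line:

  filer:/export  /mnt/export  nfs  udp,hard,intr  0 0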

> So you want to achieve the same level of "redundancy" with VM/domUs as
> you would with (from my example above) HTTP or NFS? Then the answer is
> it's not possible.

I would guess I'm not alone in this thinking. Being able to create truly 
redundant virtual environments would be the ultimate goal in the near 
future. I hope this is already in the works.

So until then, what we have is still pretty good. Moving the cluster onto 
the virtual servers as guests is a real positive, but it would be nice to 
see full redundancy in virtualization. That's when things will become 
incredibly powerful in network computing.
 
> An exception is when the VM is using a cluster FS
> like GFS, but that's another story.

I was thinking about this, but thought that it might not work out well 
depending on how it was set up. One could have, say, a GFS share for all of 
the VM servers, and then each VM server could have its own local storage to 
cut down on network storage I/O. The guests would run from local storage, 
though they could have network storage as well.
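
Just to make that concrete, here's a hypothetical domU config for that kind 
of layout (all the names are made up; xm config files are just Python 
syntax):

  # /etc/xen/web01.cfg -- hypothetical example
  name       = "web01"
  memory     = 512
  vcpus      = 1
  bootloader = "/usr/bin/pygrub"
  # root on a shared SAN/LVM volume that every dom0 can see;
  # scratch on a local-only volume to keep I/O off the network
  disk = ['phy:/dev/sanvg/web01-root,xvda,w',
          'phy:/dev/localvg/web01-scratch,xvdb,w']
  vif  = ['bridge=xenbr0']

One caveat I'd expect: the local-only scratch disk ties the guest to that 
host, so a guest configured like this couldn't be live migrated. Only guests 
with all of their disks on the shared storage would be movable.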

Either way, it was this thinking that led me to hope there might be some 
way of having redundancy for the VM servers as well.
 
> What IS possible, though, is LIVE migration. For this to work :
> - backend storage for domU is stored in shared block device (SAN LUN,

Speaking of this, while I understand that this is not redundancy, it would 
be interesting to know how quickly such a migration can occur, as this 
sounds like the best immediate solution.
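
From what I've read, the mechanics look simple enough, assuming both dom0s 
can see the same backend storage and xend relocation is enabled on the 
destination (in /etc/xen/xend-config.sxp):

  (xend-relocation-server yes)
  (xend-relocation-port 8002)

and then, from the source host:

  xm migrate --live web01 host2.example.com

The guest and host names above are made up, of course. My understanding is 
that the memory is pre-copied while the guest keeps running, so the final 
pause is typically well under a second on a LAN, though the total migration 
time depends on how much memory the guest has and how fast it's dirtying 
pages.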

Mike


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users