WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 

xen-users

Re: [Xen-users] Xen and GFS

Thanks again. I was trying to avoid the mount/unmount complexity and use SAN space more efficiently by simply keeping all the xenU domains on the shared file system, but it looks as though that won't work quite the way I had hoped. I'm not looking for HA per se (although it is important) so much as for flexibility when load balancing the VMs running across the blades, and for safer live migrations from blade to blade. I'm a little nervous about having a LUN up on two boxes at the same time, as I have some experience with killing file systems that way (in the test lab, anyway).


On Apr 18, 2006, at 1:30 PM, John Madden wrote:

On Tuesday 18 April 2006 16:17, Jim Klein wrote:
The setup I have is 3 AMD_64DP server blades with 4 GB RAM each,
attached to an FC SAN. The thought was that I would create a GFS volume
on the SAN, mount it under Xen dom0 on all 3 blades, create all the
VBDs for my VMs on the SAN, and thus be able to easily migrate VMs
from one blade to another, without any intermediary mounts and
unmounts on the blades. I thought it made a lot of sense, but maybe
my approach is wrong.

Not necessarily wrong, but perhaps just an unnecessary layer. If your intent
is HA Xen, I would set it up like this:

1) Both machines connected to the SAN over FC
2) Both machines having visibility to the same SAN LUN(s)
3) Both machines running heartbeat with private interconnects
4) LVM LVs (managed from dom0) on the LUN(s) to carve up the storage for
the domUs
5) In the event of a node failure, the failback machine starts with
an "/etc/init.d/lvm start" or equivalent to prep the LVs for use, then
starts xend, etc.
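The steps above, as they apply to the surviving node, could be sketched roughly as follows. This is only a sketch under assumptions: the volume group name vg_xen and the config path /etc/xen/auto/web1.cfg are made-up placeholders, not names from the thread, and a DRY_RUN guard prints each command instead of executing it so the sequence can be inspected safely.

```shell
#!/bin/sh
# Sketch of step 5: what the failback machine runs after a node failure.
# vg_xen and /etc/xen/auto/web1.cfg are assumed names, not from the post.

DRY_RUN=${DRY_RUN:-1}                 # default to printing, not executing
run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run vgscan                            # rescan for VGs on the shared LUN
run vgchange -ay vg_xen               # activate the LVs backing the domUs
run /etc/init.d/xend start            # bring up the Xen daemon
run xm create /etc/xen/auto/web1.cfg  # restart a guest on this node
```

In a real heartbeat setup these commands would live in the resource script that heartbeat invokes on failover, rather than being run by hand.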

For migration, you'd be doing somewhat the same thing, only you'd need a
separate SAN LUN (still using LVM inside dom0) for each VBD. My
understanding is that writing is only done by one Xen stack at a time
(node 0 before migration, node 1 after migration, nothing in between),
so all you have to do is make that LUN available to the other Xen
instance and you should be set. A cluster filesystem should only be used
when more than one node must write to the same LUN at the same time.
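The migration case could be sketched the same way. The guest name db1, target host blade2, and volume group vg_xen are all placeholder assumptions; the sketch also assumes xend on the target has relocation enabled (the xend-relocation-server setting in /etc/xen/xend-config.sxp). As above, DRY_RUN prints the commands instead of executing them.

```shell
#!/bin/sh
# Live-migration sketch; db1, blade2 and vg_xen are placeholder names.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi; }

# On the destination blade: activate the shared VG so the LV-backed VBD
# is reachable there before the guest arrives.
run vgchange -ay vg_xen

# On the source blade: push the running guest across. Only one node
# writes to the LV at a time, so no cluster filesystem is needed.
run xm migrate --live db1 blade2
```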

John



--
John Madden
Sr. UNIX Systems Engineer
Ivy Tech Community College of Indiana
jmadden@xxxxxxxxxxx


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
