xen-users
Re: [Xen-users] Xen Migration 100 nodes
On 6/23/06, Greg Freemyer <greg.freemyer@xxxxxxxxx> wrote:
Hello people of the Xeniverse,
I am working on putting together a large Xen cluster of approximately 100 nodes with more than one VM per node. If I am to migrate these Xen sessions around from node to node, can these Xen sessions share the same image, NFS-mounted? Or do I need a separate image for each Xen session? That would obviously create a lot of storage constraints.
Anyone have any suggestions on efficient image management?
Thanks,
-- ------------------------------ Christopher Vaughan
I don't know the answer, but what are you trying to do?
Some thoughts:
1) Have a boot / root image that is readonly and then have filesystems that you mount read/write, one (or more) per VM. If you go this way, you could use a live CD like SUSE provides to be your boot / root filesystem. Obviously you would want to replace the kernel with a paravirtualized one.
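A rough sketch of what option 1 might look like in a Xen 3.x-style domU config file. The paths, VM name, and image filenames here are hypothetical, purely for illustration:

```
# /etc/xen/vm1 -- illustrative paths only; adjust for your layout
kernel = "/boot/vmlinuz-2.6-xenU"   # paravirtualized domU kernel
memory = 256
name   = "vm1"

# One shared root image, attached read-only by every VM,
# plus a small per-VM image attached read/write.
disk = [ 'file:/srv/xen/shared-root.img,xvda1,r',
         'file:/srv/xen/vm1-data.img,xvda2,w' ]
root = "/dev/xvda1 ro"
```

With the shared image on an NFS export mounted at /srv/xen on each dom0, every VM can reference the same root file; only the small per-VM data images consume unique storage.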
2) Run SSI (Single System Image) on all of the VMs in read/write mode. In theory SSI would let you do this, but I would expect any disk I/O to be slow due to locking. I think I've seen some posts on their list about using Xen with SSI clusters.
Good Luck, and keep us informed. I for one have thought of having dozens of VMs spread across several machines, but I had only thought about using dedicated virtual disks, not trying to share them.
Greg
-- Greg Freemyer The Norcross Group Forensics for the 21st Century
Out of curiosity I went back and looked at the SSI list to see if they had a xen solution.
They do, at least at some level of functionality (see below). With the below you should be able to have shared-root Xen VMs, thus reducing your disk storage requirements.
If you're not familiar with SSI, I believe the below setups are designed to have one of the Xen VMs be the CFS (cluster file system) master. It can directly access the virtual disk below it. The other Xen VMs in the SSI cluster would make file I/O requests to the master.
Each individual filesystem is separately assigned a master. So if you had a data filesystem per Xen VM node, each data filesystem could be assigned to the Xen VM node actually doing the work, thus eliminating most SSI-induced performance issues.
And DRBD is used for failover. If you need it, you could assign each filesystem a backup Xen VM node master. Then if the original master dies, the alternate takes over. Obviously you would want the alternate to be on a different physical computer than the primary master.
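For reference, a DRBD failover pair is defined as a resource that names the same backing device on two hosts. This is a minimal sketch of a drbd.conf resource section; the hostnames, LVM volume, and addresses are made up for illustration:

```
# drbd.conf sketch -- hypothetical hosts node-a/node-b
resource vm1-data {
  protocol C;                     # synchronous replication
  on node-a {
    device    /dev/drbd0;
    disk      /dev/vg0/vm1-data;  # local backing store
    address   192.168.1.10:7788;
    meta-disk internal;
  }
  on node-b {
    device    /dev/drbd0;
    disk      /dev/vg0/vm1-data;
    address   192.168.1.11:7788;
    meta-disk internal;
  }
}
```

The VM (or SSI filesystem master) would then sit on /dev/drbd0; if node-a dies, node-b promotes its copy to primary and takes over.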
Even better than DRBD support would be to have a reliable shared storage facility on the backend. That, too, is supported in SSI, but I've forgotten the details.
Greg
>>> Thanks to OpenSSI user Owen Campbell, there are now two
2.6.10 domU Xen OpenSSI kernels available for download.
== OpenSSI domU, without DRBD URL:
http://deb.openssi.org/contrib/vmlinuz-2.6.10-openssi-xenu MD5: 8f26aa3f7efe3858692b3acdf3db4c21 <<<
-- Greg Freemyer The Norcross Group Forensics for the 21st Century
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users