Re: [Xen-users] Xen Migration 100 nodes

Subject: Re: [Xen-users] Xen Migration 100 nodes
From: "Chris Vaughan" <supercomputer@xxxxxxxxx>
Date: Mon, 26 Jun 2006 10:06:54 -0600
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Thanks for the suggestions.  At this point we haven't fully written our scripts.  We are still in the planning stages and have been outlining potential issues that could develop down the road, storage being one of them.  Our goal is to utilize our cluster to its full potential and distribute the load across it, migrating domains off the trouble spots onto less utilized nodes.
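
Concretely, the move itself would just be a per-domain live migration, something along the lines of (domain and host names here are only placeholders):

    xm migrate --live web03 node17    # move the busy guest "web03" onto the lightly loaded host "node17"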

Thanks for the input

On 6/26/06, Tim Post <tim.post@xxxxxxxxxxxxxxx> wrote:
I have had success in the past using Xen / OpenSSI; however, you want at
least two physical NICs per server. I'm imagining your goal is something
like this:

http://netkinetics.net/xen-typical.pdf

What you may consider using is openQRM with the Xen plugin
(www.openqrm.org), which can accomplish pretty much the same thing. Some
work is going to be needed on your part.

I have openQRM installed to a 1.5 GB file-backed VBD (CentOS 4.3 guest
image) that seems to like running at 128 MB; it's going into production
this week to manage a farm of 50-odd blades. It does a nice job of
bringing up dom0 on a blank box upon boot, then a second script runs
and homes in on that node's role, domU setups and configurations.
Basic grid, but basic is good when you need to fix it at 3 AM.
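
Roughly how a file-backed VBD like that gets built and attached, if it
helps (sizes and paths below are just examples):

    # create a ~1.5 GB sparse image file and put a filesystem on it
    dd if=/dev/zero of=/srv/xen/openqrm.img bs=1M count=1 seek=1535
    mkfs.ext3 -F /srv/xen/openqrm.img

    # the relevant lines from the guest's domU config
    memory = 128
    disk   = [ "file:/srv/xen/openqrm.img,sda1,w" ]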

You don't want to use NFS if you're planning on migrating frequently; I
would think AoE or iSCSI would be the better route. For distributed
sessions I think OpenSSI is the sanest approach, since it does most of
your sanity checking for you. Also make sure the Xen interconnect has
gig-E.
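
With AoE the plumbing is roughly like this (shelf/slot numbers, interface
names and device paths are only examples):

    # on the storage box: export a volume as AoE shelf 0, slot 1 over eth1
    vblade 0 1 eth1 /dev/vg0/guest1 &

    # on every dom0 that might host the guest
    modprobe aoe
    aoe-discover              # the export shows up as /dev/etherd/e0.1

    # the domU config then points at the shared block device instead of NFS
    disk = [ "phy:/dev/etherd/e0.1,sda1,w" ]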

Just curious, how many of these hosts are also SSL hosts? Have the
scripts / applications been tested OK with migrating sessions? There may
also be some work to do on the scripts themselves, depending on how they
deal with making temporary files and caching.

I've done this only with Xen 2.0.7. None of the domains we've set this
up for have yet seen any (real) sustained traffic, such as the Slashdot
(./) effect. So it also really depends on how much they push (mb/sec),
how you set up your bridging and the quality of the network you're on.
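
By bridging setup I mean things like whether guest traffic gets its own
NIC, e.g. roughly (interface and bridge names are only examples):

    # put a dedicated second NIC on its own bridge for guest traffic;
    # xend's network-bridge script can do the equivalent automatically
    brctl addbr xenbr1
    brctl addif xenbr1 eth1
    ifconfig xenbr1 up

    # and in the domU config:
    vif = [ "bridge=xenbr1" ]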

You should also take into consideration the types of files being served.
For instance, if you often have people on 56k connections downloading
10+ MB files, that makes a difference, especially if you're using any
kind of accelerator.

HTH - good luck :)

Tim


On Mon, 2006-06-26 at 10:14 -0400, Greg Freemyer wrote:
> On 6/23/06, Greg Freemyer < greg.freemyer@xxxxxxxxx> wrote:
>         On 6/23/06, Chris Vaughan <supercomputer@xxxxxxxxx> wrote:
>
>                 Hello people of the Xeniverse,
>
>                 I am working on putting together a large xen cluster
>                 of 100 approximate nodes with more than one vm per
>                 node.  If I am to migrate these xen sessions around
>                 from node to node can these xen sessions share the
>                 same image that is NFS mounted?  Or do I need to have
>                 a different image for each xen session?  This will
>                 obviously create a lot of storage constraints.
>
>                 Anyone have any suggestions on efficient image
>                 management?
>
>                 Thanks,
>
>
>                 --
>                 ------------------------------
>                 Christopher Vaughan
>
>         I don't know the answer, but what are you trying to do?
>
>         Some thoughts:
>
>         1) Have a boot / root image that is read-only and then have
>         filesystems that you mount read/write, one (or more) per VM
>         (see the rough config sketch after 2) below).  If you go this
>         way, you could use a live CD like the one SUSE provides as your
>         boot / root filesystem.  Obviously you would want to replace
>         the kernel with a paravirtualized one.
>
>         2) Run SSI (Single System Image) on all of the VMs in
>         read/write mode.  In theory SSI would let you do this, but I
>         would expect any disk i/o to be slow due to locking.  I think
>         I've seen some posts on their list about using xen with ssi
>         clusters.
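>
>         A rough sketch of what 1) could look like as a domU config, just
>         to illustrate (paths and device names are placeholders):
>
>             kernel = "/boot/vmlinuz-2.6-xenU"
>             name   = "web01"
>             # shared root image attached read-only, per-VM data disk read/write
>             disk   = [ "file:/srv/xen/root-ro.img,sda1,r",
>                        "file:/srv/xen/web01-data.img,sda2,w" ]
>             root   = "/dev/sda1 ro"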
>
>         Good Luck, and keep us informed.  I for one have thought of
>         having dozens of VMs spread across several machines, but I had
>         only thought about using dedicated virtual disks, not trying
>         to share them.
>
>
>
>         Greg
>
>
>
>         --
>         Greg Freemyer
>         The Norcross Group
>         Forensics for the 21st Century
>
> Out of curiosity I went back and looked at the SSI list to see if they
> had a xen solution.
>
> They do, at least at some level of functionality (see below).  With the
> below you should be able to have shared-root Xen VMs, thus reducing
> your disk storage requirements.
>
> If you're not familiar with SSI, I believe the below setups are designed
> to have one of the Xen VMs be the CFS (cluster file system) master.
> It can directly access the virtual disk below it.  The other Xen VMs
> in the SSI cluster would make file I/O requests to the master.
>
> Each individual filesystem is separately assigned a master.  So if you
> had a data filesystem per Xen VM node, the data filesystems could be
> assigned to the Xen VM node actually doing the work, thus eliminating
> most SSI-induced performance issues.
>
> And DRBD is used for failover.  If you need it, you could assign each
> filesystem a backup Xen VM node master.  Then if the original master
> dies, the alternate takes over.  Obviously you would want the
> alternate to be on a different physical computer than the primary
> master.
>
> Even better than DRBD support would be to have a reliable shared
> storage facility on the backend.  That too is supported in SSI, but
> I've forgotten the details.
>
> Greg
> >>>
> Thanks to OpenSSI user Owen Campbell, there are now two 2.6.10 domU
> Xen
> OpenSSI kernels available for download.
>
>  == OpenSSI domU, without DRBD
>  URL: http://deb.openssi.org/contrib/vmlinuz-2.6.10-openssi-xenu
>  MD5: 8f26aa3f7efe3858692b3acdf3db4c21
>  ==
>
>  == OpenSSI domU, with DRBD
>  URL: http://deb.openssi.org/contrib/vmlinuz-2.6.10-openssi-drbd-xenu
>  MD5: 25e3688ac6e51cada1baf85004636658
>  ==
>
> VERY IMPORTANT: standard disclaimer applies. Not official, OpenSSI
> accepts no liability, might blow up your machine, and so forth.
>
> Cheers,
>
> --
> Ivan Krstic <krstic@xxxxxxxxxxxxxxx> | GPG: 0x147C722D
>
> <<<
>
> --
> Greg Freemyer
> The Norcross Group
> Forensics for the 21st Century




--
------------------------------
Christopher Vaughan
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users