The best solution I have found so far is to have one (or more) NFS
servers on a backside network providing disk I/O to the individual
XenLinux unprivileged domains. This lets me restart a XenLinux
instance on a different machine should the primary fail. It also
gives me a more centralized filesystem that uses NFS/iSCSI on
LVM on RAID-5 for reliability.
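For reference, the storage-server side of that layout might look roughly
like this (device names, volume names, sizes, and the export path are all
made up for illustration; adapt to your hardware):

```shell
# Build a RAID-5 array from three disks on the storage server:
mdadm --create /dev/md0 --level=5 --raid-devices=3 \
    /dev/sdb /dev/sdc /dev/sdd

# Layer LVM on top so each unprivileged domain can get its own LV:
pvcreate /dev/md0
vgcreate xenvg /dev/md0
lvcreate -L 4G -n domU1-root xenvg

# Make a filesystem, mount it, and export it over NFS
# (example line for /etc/exports):
#   /exports/domU1  10.0.0.0/24(rw,no_root_squash,sync)
```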
I'm far from having a working automated system right now. I still have
the iSCSI death issues to deal with when combining two iSCSI target
devices into a RAID-1 array in Domain-0 for export to the XenLinux
unprivileged domain. 8-( NFS roots also still periodically "lock up"
momentarily (~2 to 10 seconds). So the solution isn't ready yet.
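For the curious, the RAID-1-over-iSCSI setup in Domain-0 is along these
lines (the /dev/sd* names are hypothetical -- whatever your iSCSI
initiator assigns to the two targets -- and the disk line uses the Xen
2.0 config syntax):

```shell
# Assume the two iSCSI targets have appeared in Domain-0 as
# /dev/sda and /dev/sdb; mirror them:
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Then hand the mirror to the unprivileged domain in its config file:
#   disk = [ 'phy:md1,sda1,w' ]
```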
I do highly recommend that people use LVM LVs for individual exportable
block devices and forget about using COWs to "save space". Adding a
200GB disk is dirt cheap (unless you are talking about SCSI), so IMHO
it's not worth the savings to multi-COW a filesystem into 2+
unprivileged domains on a single machine. The 1 to 2 GB of saved space
just isn't worth it at $0.10 per GB of disk space. :) Add in the system
overhead and seek times the COWs incur after your first FS upgrade, and
in my experience your return is worse than using separate partitions/LVs.
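The one-LV-per-domain approach is just (volume group and LV names
are illustrative):

```shell
# One independent LV per exportable block device, no COW sharing:
lvcreate -L 2G -n vhost1-root xenvg
lvcreate -L 2G -n vhost2-root xenvg

# Each domain config then points at its own LV, e.g.:
#   disk = [ 'phy:xenvg/vhost1-root,sda1,w' ]
```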
Performance- and maintainability-wise, IMHO it's better to use a common
read-only partition for your basic OS root FS and /usr, with the
traditional /etc, /var and /usr/local redirection to a tiny config
partition plus a unique /home partition for per-vhost files. This is
only worth it if you can't afford the ~2GB for independent root and
/usr filesystems.
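A sketch of what that split might look like in a vhost's /etc/fstab
(device names, sizes, and the /mnt/config mount point are assumptions;
/etc, /var and /usr/local would be symlinked or bind-mounted into the
config partition):

```shell
# Shared, read-only OS root + /usr, common to all vhosts:
#   /dev/sda1  /            ext3  ro  0 1
# Tiny writable per-vhost config partition:
#   /dev/sda2  /mnt/config  ext3  rw  0 2
# Unique per-vhost files:
#   /dev/sda3  /home        ext3  rw  0 2
```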
For anyone using Debian Sarge for Domain-0 (and unprivileged domains),
I have a repo that's updated whenever I see a "stable" snapshot go by.
It can be found at http://www.terrabox.com/debian. It's a simple Debian
repository of binary and source packages. I'm still working on getting
the packaging refined and approved by my new-DM sponsor so that the
snapshots will make it into the Debian testing distribution in the next
month or two.
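If you want to pull from it, the apt lines would be something like the
following (the exact suite/component layout is a guess on my part --
check the repository itself):

```shell
# Example entries for /etc/apt/sources.list:
#   deb http://www.terrabox.com/debian sarge main
#   deb-src http://www.terrabox.com/debian sarge main
apt-get update
```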
I'm working on a first draft of a Debian-on-Xen-and-LVM howto. But,
like Ian and the group, dev work comes first until it stops flaking out
on me. ;) If anyone is interested in a working HA solution using Xen
for their business, I can assist with design and deployment for a small
donation to my living expenses fund. ;)
Brian Wolfe
TerraBox.com Inc.
Linux and HA Contracting.
On Mon, 2004-09-27 at 16:53, Mark A. Williamson wrote:
> > Actually, I'd also be interested in any docs or information relating to LVM
> > with COW as an alternative to loopback images. If I want to be able to
> > shift my VM images about, can I do it with LVM?
>
> Shift them about between machines? The ideal thing to do for that would be to
> keep a copy of the base image on all machines and just copy the changes to
> that around your cluster when migrating. I don't know how easy this is with
> LVM... Anyone?
>
> > The section on this in the
> > manual (5.2) is a little sparse on this subject to say the least! Guess
> > you're all too busy coding :o)
>
> Yeah :-) This manual might get a bit fleshed out before the 2.0 release but I
> was rather hoping someone who uses LVM would help out with the LVM
> section ;-) I just use dedicated partitions for domains.
>
> Cheers,
> Mark
>
> >
> > Cheers,
> > Paul
> >
> > On Tuesday 28 September 2004 07:23 am, Paul Dorman wrote:
> > > Hi all,
> > >
> > > I wonder if anyone on the list has written any scripts to automate the
> > > management of VMs with loopback images. Here's what I want to be able to
> > > do:
> > >
> > > * Store existing physical machine file systems, or pristine installs in
> > > loopback images on my Xen servers (something I'll do manually)
> > >
> > > * Run a script that will start a VM from one of these images,
> > > automatically associate it with a loopback device, give it a name, RAM
> > > allocation, network addresses, and set various internal parameters, such
> > > as hostname, routes, etc., based on a set of arguments. So something like
> > > "script <imagename> <hostname> <netconfigs> <RAM> <.. etc.
> > >
> > > * Have the same script take another argument that will cause it to clone
> > > a filesystem image first before starting the VM, so that I can use a set
> > > of images as VM templates. I intend to have a large collection of
> > > templates which my developers can use to create VMs suited to whatever
> > > project they are working on.
> > >
> > > * After a VM machine has been instantiated, I would like to be able to
> > > start and stop it with simple "start hostname" and "stop hostname" kinds
> > > of commands.
> > >
> > > * Have management tools so that I can for example shift a VM from one Xen
> > > server to another (shift hostname xenservername). These would also be
> > > used by load balancing scripts to shift machines around to manage
> > > resources.
> > >
> > > I'd like to build a web-based management system for these scripts, so
> > > that developers are free to create and control Xen VMs (though naturally
> > > with limitations based on what the servers can handle -- so my bosses
> > > will know when they need to buy me more servers :o) ).
> > >
> > > I don't see these as particularly difficult, but if someone has done them
> > > already .... Also, I'd appreciate any thoughts you might have on
> > > automation of this kind, particularly in terms of functionality and
> > > practicalities.
> > >
> > > Thanks for your time!
> > >
> > > Paul
> >
> > -------------------------------------------------------
> > This SF.Net email is sponsored by: YOU BE THE JUDGE. Be one of 170
> > Project Admins to receive an Apple iPod Mini FREE for your judgement on
> > who ports your project to Linux PPC the best. Sponsored by IBM.
> > Deadline: Sept. 24. Go here: http://sf.net/ppc_contest.php
> > _______________________________________________
> > Xen-devel mailing list
> > Xen-devel@xxxxxxxxxxxxxxxxxxxxx
> > https://lists.sourceforge.net/lists/listinfo/xen-devel
>
>