I've been working on putting together a system to manage clusters of Xen
nodes. I can't go into much detail yet, since it's still in the design
stage.
As for being stable enough to host on: I'm using a 1.3 version from May or
June (I can't remember the exact version off the top of my head) to do
virtual server hosting for my own machines and for several clients.
Interestingly enough, with the setup I moved to I'm getting better overall
performance from two main NFS servers with 8-disk RAID-5 arrays than from
individual machines with mirror sets!
I'll be looking for anyone interested in trying the prototype tools later,
once we've picked up the new toolset that Ian and friends are writing for
everyone and have reworked things on top of it. :)
On Sun, 4 Jul 2004 13:31:39 -0700
Steve Traugott <stevegt@xxxxxxxxxxxxx> said...
On Sun, Jul 04, 2004 at 10:11:38AM +0100, Ian Pratt wrote:
>
> > Still hunting for better alternatives vs. NFS roots -- does anyone know
> > what I'd need to do to get the driver for QLogic fibre HBA's working, so
> > I can host VBD's from a SAN?
>
> If there's a driver in Linux, it should just work if you use the
> unstable-xeno tree and modify the config for the domain 0 linux
> to add the driver.
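For what it's worth, enabling the HBA driver in the domain 0 kernel should just be a config change along these lines (a sketch -- the exact option name depends on the HBA model and kernel version, so treat the names below as assumptions):

```
# in the domain 0 Linux kernel .config:
CONFIG_SCSI=y
# older QLogic ISP2100 FC driver; newer ISP2x00 cards use a different option
CONFIG_SCSI_QLOGIC_FC=y
```

Then rebuild the domain 0 kernel as usual and the HBA should show up as an ordinary SCSI host.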
So, how stable is unstable these days? I.e., would you trust it to host
other people's guests?
> In the new tree, rather than using our own virtual disk code we're
> planning on using Linux's standard LVM2 code to enable
> physical partitions to be sliced and diced. The tool support for
> this isn't quite there yet.
That sounds good -- you mean tool support as in Python? I should be
able to help if that's the case.
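For anyone following along, the slicing and dicing Ian describes is just stock LVM2 from the command line (a sketch -- the device path and volume names here are made up for illustration):

```
# mark the physical partition as an LVM physical volume
pvcreate /dev/sda3
# pool it into a volume group, then carve out per-domain logical volumes
vgcreate xen_vg /dev/sda3
lvcreate -L 4G -n dom1_root xen_vg
lvcreate -L 4G -n dom2_root xen_vg
```

Each logical volume (e.g. /dev/xen_vg/dom1_root) can then be handed to a domain as a block device; the missing piece is the tool glue that automates this per domain.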
> An alternative to using a FCAL SAN is to use iSCSI. I've found
> that the Linux Cisco iSCSI initiator code works nicely, and can
> either talk to a hardware iSCSI target or to the Ardistech Linux
> iSCSI s/w target. I've generally configured it such that the
> domain talks iSCSI directly (using an initrd to enable root to be
> on the iSCSI volume). Others have configured iSCSI in domain 0
> and then exported the partitions to other domains as block
> devices using the normal VBD mechanism.
I'd need to use the iSCSI-in-domain-0 approach (other people's
guests...). I haven't tried it due to a lack of hardware targets, and I
didn't get warm fuzzies from Ardistech's code -- you've had no problems
with it, though?
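For the domain-0 approach, exporting the imported iSCSI block device to a guest should just be the usual VBD line in the domain config file (a sketch -- the device names are assumptions for a particular setup):

```
# hypothetical domain config fragment: /dev/sdb1 is the iSCSI-backed
# partition as seen by domain 0; it appears as sda1 inside the guest
disk = [ 'phy:sdb1,sda1,w' ]
```

The guest then sees an ordinary block device and never needs to know iSCSI is underneath.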
Steve
--
Stephen G. Traugott (KG6HDQ)
UNIX/Linux Infrastructure Architect, TerraLuna LLC
stevegt@xxxxxxxxxxxxx
http://www.stevegt.com -- http://Infrastructures.Org
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxxx
https://lists.sourceforge.net/lists/listinfo/xen-devel