xen-users

Re: [Xen-users] iSCSI target - run in Dom0 or DomU?

The MD nightmare is predicated on a poor naming convention. I have a
30-node Xen grid up now running off of 4x 2TB SANs, but I set them up
logically, similar to an Excel sheet.

Each node has 2 NICs, the second being private gig-e on its own
switch, used only for SAN and migration traffic.
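(For illustration, bringing that second NIC up is a one-liner; eth1 and
the 10.0.1.0/24 range are assumptions, substitute your own:

  # private SAN/migration interface -- no gateway, never routed
  ifconfig eth1 10.0.1.11 netmask 255.255.255.0 up

Make it persistent in your distro's network config so the SAN path
survives reboots.)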

Nodes (dom-0 hosts) are sorted into rows and columns representing
their positions on the racks (we're using blades). E.g.:

1-a is the top left-hand corner, 1-b is next to it, and 2-b would be
just under it.

A third identifier is added to signify the role: 1-a-0 is dom-0 on
node 1-a, and 1-a-3 would be dom-u #3 on node 1-a.

This makes it really easy to keep apples to apples and oranges to
oranges. If properly scripted, MD does the job well.
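As a rough sketch of the scripting I mean -- the /dev/iscsi/... paths
are made up for illustration, use whatever your initiator actually
hands you:

  # mirror the two SAN copies of guest 1-a-3's disk with MD RAID1
  mdadm --create /dev/md3 --level=1 --raid-devices=2 \
      /dev/iscsi/san1/1-a-3 /dev/iscsi/san2/1-a-3
  # later, on whichever dom-0 currently owns the guest:
  mdadm --assemble /dev/md3 /dev/iscsi/san1/1-a-3 /dev/iscsi/san2/1-a-3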

I first thought about drbd, but I kept KISS in mind and wanted to plan
the grid in such a way that any slightly-better-than-average admin
could handle it, because I hate 3 AM wake-up calls.

So the end result is, once you set up key pairing (or your
interconnect of choice between the nodes), you get the simplicity of
OpenSSI, i.e. -

onnode 1-b-0 do-this-cmd
onclass someclass do-this-cmd
onall do-this-cmd

... with a few hours' worth of bash scripting (actually I use
Herbert's dash port from BSD).
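A minimal sketch of those wrappers, assuming key pairing is already in
place (the /etc/grid/* files are an assumption -- one node name per
line, plus a class.<name> file per class):

  onnode () {
      # onnode 1-b-0 do-this-cmd -- run a command on a single node
      n="$1"; shift
      ssh -n "root@$n" "$@"
  }

  onclass () {
      # onclass someclass do-this-cmd -- every node in the class file
      c="$1"; shift
      while read -r n; do ssh -n "root@$n" "$@"; done < "/etc/grid/class.$c"
  }

  onall () {
      # onall do-this-cmd -- every node in the grid
      while read -r n; do ssh -n "root@$n" "$@"; done < /etc/grid/nodes
  }

(ssh -n matters inside the read loops, otherwise ssh eats the rest of
the node list from stdin.)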

I keep a file similar to /etc/hosts, which will soon sit on a GFS
filesystem shared between the dom-0 hosts, that maps which iSCSI
target lands where; it really helps. Just create a lockfile named
.1b1-migration or whatever and toss it on that shared fs; in it, list
the 2 SAN partitions being synced for migration, and remove it when
done. Have a cron job check the age of .*-migration files if it's
unsupervised, as you indicated.
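Something along these lines (the /san/shared mountpoint and device
paths are assumptions for illustration):

  # flag that 1-b-1 is mid-migration, listing the 2 SAN copies being synced
  printf '%s\n' /dev/iscsi/san1/1-b-1 /dev/iscsi/san2/1-b-1 \
      > /san/shared/.1b1-migration
  # ... sync and migrate, then:
  rm /san/shared/.1b1-migration

  # cron side: nag root about any lock older than 2 hours
  locks=$(find /san/shared -maxdepth 1 -name '.*-migration' -mmin +120)
  [ -n "$locks" ] && echo "$locks" | mail -s 'stale migration locks' root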

That can obviously work with drbd too, but the point was to keep
things organized enough that a simpler technology, better known to
everyone, could be used for the SANs. With a little more hammering,
it's not too difficult to set up isolated single-system-image
load-balanced arrays.

You may also want to check out Pound, http://www.apsis.ch/pound/

I've had some success on a smaller scale (pushing about 250-400 meg)
using Pound, but have yet to use it on anything heavier. If your
network can afford to try it out, it would yield very meaningful data.
Pound has a very basic heartbeat that needs improvement; don't rely on
Pound to know whether a node is unreachable, especially under stress.
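For the curious, a minimal config of the sort I've run -- the
addresses are made up, and check the docs at the URL above for the
real option set:

  # write a bare-bones pound.cfg and start it
  cat > /usr/local/etc/pound.cfg <<'EOF'
  # probe backends every 10s -- this is the basic heartbeat I mentioned
  Alive 10

  ListenHTTP
      Address 0.0.0.0
      Port    80
  End

  Service
      BackEnd
          Address 10.0.1.21
          Port    80
      End
      BackEnd
          Address 10.0.1.22
          Port    80
      End
  End
  EOF
  pound -f /usr/local/etc/pound.cfg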

Just remember to give dom-0 adequate RAM for the bridges and
initiators needed to pull it all off. I allow 128MB per initiator and
32MB per bridge. Some people say allow more, some less, but that works
as a rule of thumb for me.
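E.g., a dom-0 carrying 8 guests, each with one initiator and one
bridge, works out like this (the 256MB base for dom-0 itself is my own
assumption, size to taste):

  # back-of-envelope dom-0 sizing, all in MB
  base=256 initiators=8 bridges=8
  echo $((base + 128 * initiators + 32 * bridges))   # prints 1536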

This is straightforward to pull off, but it does involve quite a bit
of work. Then again, you're talking about a "smart" auto-scaling,
auto-migrating system, so I'm guessing you expected that :)

Or you could just use OpenSSI as an HVM... or use 2.0.7 and the
xen-ssi kernel. THAT would be the easiest route... but who likes
that? :P

HTH
-Tim

On Fri, 2006-08-25 at 15:17 +0100, Matthew Wild wrote:
> On Friday 25 August 2006 14:49, Jason wrote:
> > I wonder, instead of drbd, what would happen if you exported both
> > storage servers' iSCSI targets to your Xen machines and then used Linux
> > software RAID1 to mount them both and keep them in sync.
> >
> You'd end up with a management nightmare. Which dom0 would be maintaining the 
> RAID set? And you'd have to keep an eye on all the RAID configurations for 
> each virtual machine's disk(s) on every dom0. You would also be trying to 
> write to both storage servers at the same time, and that's not possible with 
> drbd.
> 
> Part of the point is to make the Xen servers as simple as possible, and 
> therefore interchangeable/replaceable, with little extra configuration to 
> support. Eventually I would expect to use drbd 8.0 on the storage servers, 
> giving primary/primary access, and use multipath-tools allowing me to 
> dispense with heartbeat on the storage servers.
> 
> Matthew


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users