Hi,
On Tuesday, 5 September 2006 22:19, Julian Davison wrote:
> Hans de Hartog wrote:
> > saptarshi naha wrote:
> > If you're into HA, the first rule is: eliminate SPOFs
> > (Single Points Of Failure). Your physical box IS a SPOF.
> > So, running more operating systems on a single box (xen)
> > does not give you more availability (on the contrary, the
> > more dom's you're running, the more dom's die if your box
> > dies).
> > Therefore, using DRBD within xen-domains on the same physical
> > box doesn't give you more availability either.
> That makes sense :)
Well, HA is not _only_ about eliminating SPOFs but also about the time it
takes to recover from a failure.
> > In general, (IMHO) xen buys you nothing for HA.
> However, I thought the advantage of xen for HA was
> primarily due to easy migration. If things are about
> to fall over it's a simple process to move the domains
> to other hardware.
Yes, and this is the way I do it:
- 4 servers overall
- 2 servers as gnbd servers, each on a separate physical GBit network
connected to the 2 dom0 servers
- 2 servers as dom0 servers and also gnbd clients
- the 2 gnbd servers are configured identically except for the IP address
- the 2 dom0 servers are configured identically except for the IP address
- domUs were built as follows:
  * sda1 (gnbd server 1) and sdb1 (gnbd server 2) as md0 for the root
    filesystem
  * sda2 (gnbd server 1) and sdb2 (gnbd server 2) as md1 for swap
  * network eth0 bridged in dom0 to the external network
The result is that I can do live migration of domUs between dom0 server 1 and
dom0 server 2.
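A minimal sketch of how such a domU could be wired up; the volume names,
export names and the domU name are hypothetical, the commands are standard
gnbd/Xen/mdadm usage:

  # on gnbd server 1 (analogous on gnbd server 2): export logical
  # volumes over the network
  gnbd_export -d /dev/vg0/domu1-root -e domu1-root-1
  gnbd_export -d /dev/vg0/domu1-swap -e domu1-swap-1

  # on each dom0: import the exports of both gnbd servers, they show
  # up as /dev/gnbd/<exportname>
  gnbd_import -i gnbd-server-1
  gnbd_import -i gnbd-server-2

  # /etc/xen/domu1 (excerpt): one disk half from each gnbd server
  disk = [ 'phy:/dev/gnbd/domu1-root-1,sda1,w',
           'phy:/dev/gnbd/domu1-root-2,sdb1,w',
           'phy:/dev/gnbd/domu1-swap-1,sda2,w',
           'phy:/dev/gnbd/domu1-swap-2,sdb2,w' ]

  # inside the domU, once at install time: mirror the two halves
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2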
If one gnbd server fails, the RAID inside the domU degrades but keeps working.
If the dom0 my domU is running on fails, I have to do a manual failover (this
could also be automated), which means starting my domU on the other dom0
server.
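Both cases are simple on the command line; a sketch with a hypothetical
domU name:

  # planned move (e.g. before maintenance): live migration, the domU
  # keeps running while it moves to dom0 server 2
  xm migrate --live domu1 dom0-2

  # dom0 server 1 died: manual failover, start the domU from its
  # (identical) config file on dom0 server 2
  xm create /etc/xen/domu1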
In one of my setups I've combined dom0 and gnbd server on the same hardware,
so I only need two servers.
I also have a setup where the external network connection (internet) has no
SPOF. That is, two network cards bonded in dom0 for failover, connected to
two switches; these two switches are connected to two HA firewalls, and each
firewall has its own cable to the internet, which leaves the building the
servers are in on different sides (well, this is _extreme_).
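For the bonding part, a minimal sketch using the 2.6-era bonding driver;
the interface names and the address are hypothetical:

  # /etc/modprobe.conf: active-backup is pure failover, miimon polls
  # the link state every 100 ms
  alias bond0 bonding
  options bonding mode=active-backup miimon=100

  # enslave both NICs (normally done by the distro's network scripts);
  # each NIC is cabled to a different switch
  ifconfig bond0 192.168.1.10 netmask 255.255.255.0 up
  ifenslave bond0 eth0 eth1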
advantages
- The only single point of failure left is the CPU a domU is currently
running on.
- RAID1 inside the domU makes it possible to resize the filesystem without
interrupting the domU (you need to block-detach and block-attach the resized
block device so the domU notices the resize). This means (see the command
sketch after this list):
* degrade raid1, fail and remove one block device out of the raid1
* block-detach
* resize block device on gnbd server, should be a logical volume
* block-attach
* rebuild raid1
* degrade raid1, fail and remove the other block device out of the raid1
* block-detach
* resize other block device on gnbd server, should be a logical volume
* block-attach
* rebuild raid1
* grow raid1
* resize filesystem
(it is tested and works well)
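A sketch of one half of that cycle, assuming ext3 with online resize
support; the domU name, LV name and sizes are hypothetical (depending on the
xm version the device may have to be given by its numeric id):

  # inside the domU: degrade the mirror, drop the first half
  mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1

  # in dom0: detach the old block device from the domU
  xm block-detach domu1 sda1

  # on gnbd server 1: grow the logical volume behind it
  lvextend -L +10G /dev/vg0/domu1-root

  # in dom0: attach the grown device again
  xm block-attach domu1 phy:/dev/gnbd/domu1-root-1 sda1 w

  # inside the domU: rebuild, then repeat all of the above for sdb1
  mdadm /dev/md0 --add /dev/sda1

  # after both halves are grown: grow the raid, then the filesystem
  mdadm --grow /dev/md0 --size=max
  resize2fs /dev/md0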
- Disk I/O performance. The gnbd servers are connected over a GBit network.
The disks on each gnbd server are striped (RAID0) for full performance;
reliability comes from mirroring across the other gnbd server. In my tests I
get ~60MB/s to the filesystem (dd) in a domU. Performance can be boosted with
more striped disks (in my cases I've done this with two or three cheap SATA
disks). In one of my setups the gnbd servers are connected with two GBit
network cards bonded for double performance, because there are 6 slots for
disks but only three are used so far.
Remember, here you have something like a SAN built out of cheap hardware!
(The RAID0 is not really a RAID0; the disks are physical volumes for the
logical volume manager, out of which I make striped logical volumes in my
case, as sketched below.)
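A sketch of that layout on a gnbd server, with hypothetical disk and volume
names:

  # three cheap SATA disks become physical volumes in one volume group
  pvcreate /dev/sdb /dev/sdc /dev/sdd
  vgcreate vg0 /dev/sdb /dev/sdc /dev/sdd

  # -i 3 stripes the logical volume across all three disks
  # (raid0-style) for full sequential bandwidth
  lvcreate -i 3 -L 20G -n domu1-root vg0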
- You can use the power of both dom0s and only have to start all domUs on one
dom0 if the other fails. This means that during a dom0 failure the domUs get
less memory and CPU power, but they WORK. And when there is no failure you
get all the power you have bought.
- You, or rather the domU admins, only have to care about their single domU
as if it were a single hardware server, but with the advantages of HA on top.
- Clean design. Only commonly used and proven techniques are used here (GBit
ethernet, gnbd, RAID1, Xen). OK, gnbd and Xen are not in the mainline kernel
tree, but gnbd has been in use at Red Hat for a long time now, and Xen
(hopefully) will get into the mainline kernel tree.
possibilities
- In my setups I do rsync/hardlink backups on the gnbd servers into an extra
backup partition, which I resize as I need space. The backups run at
different times on each gnbd server, so I have backups from different points
in time on the different gnbd servers. I can keep as many backups as I have
space for, and I do tape backups from the gnbd servers. My backup script also
automatically backs up new logical volumes, so I don't have to think about
the backups anymore. (A minimal sketch follows this item.)
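A minimal rsync/hardlink sketch; the paths and dates are hypothetical:

  # files unchanged since yesterday's tree are hardlinked, not copied,
  # so every daily tree is fully browsable but only changed files
  # cost space
  rsync -a --delete \
      --link-dest=/backup/domu1/2006-09-04 \
      /mnt/domu1-root/ /backup/domu1/2006-09-05/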
- You can have more than two gnbd servers for more performance or more
availability.
- You can have more than two dom0 servers for more CPU power.
- You can also make your network fail-safe, as mentioned above.
- You can make snapshots (LVM) of the disks of your domUs and start a copy of
the domU on a different IP to test things like updates, as sketched below.
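A sketch of the snapshot trick on a gnbd server, with hypothetical names:

  # copy-on-write snapshot of a domU disk; 5G is the room for changes
  lvcreate -s -L 5G -n domu1-root-test /dev/vg0/domu1-root

  # export the snapshot like a normal volume and boot a test copy of
  # the domU (on another IP) from it
  gnbd_export -d /dev/vg0/domu1-root-test -e domu1-root-test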
drawbacks
- You can get at most the CPU power of one hardware server! No HPC!
- You have to take care of your RAID1 inside the domUs. In my case, scripts
inside the domUs do this and rebuild automatically after a failure recovers.
With block devices over the network this can happen more often than with
block devices on SCSI or SATA. (A minimal rebuild sketch follows this item.)
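A minimal rebuild sketch, assuming the failed gnbd device (here sdb1) has
come back under the same name:

  # inside the domU: drop the failed member, add it back, let md resync
  mdadm /dev/md0 --remove /dev/sdb1
  mdadm /dev/md0 --add /dev/sdb1
  cat /proc/mdstat    # shows the resync progress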
- I had to write my own fencing for gnbd because the shipped fencing agents
didn't fit my needs.
- You have to watch the memory consumption on the dom0s so that one dom0 can
take over the domUs of the other in a failover.
Well, there may be more pros and cons to my solution. The only thing I can
say is that I've had one setup with four servers running since April 2006 and
one setup with two servers (gnbd server and dom0 on the same hardware) since
June 2006, without problems. The next setup with four servers is in the works
(a lot of old hardware will be migrated to it) and will soon be in
production.
I also have a lot of other Xen hosts in production use, but they run on
single hardware only. I use the backup and RAID1 approach within these single
Xen hosts as well.
--
greetings
eMHa