WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-users

Re: [Xen-users] Xen and iSCSI

Markus Hochholdinger wrote:

well, my idea of HA is as follows:
- Two storage servers on individual SANs connected to the Xen hosts. Each storage server provides block devices per iscsi.
I guess gnbd can be a drop-in replacement for iSCSI. I would think performance is better, as gnbd was written for the Linux kernel while the SCSI protocol was written for hardware. I _know_ gnbd is easier to set up: you just point the client to the server and the client populates /dev/gnbd/ with the named entries (the devices are given logical names - no SCSI buses, devices or LUNs).
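As a rough sketch of what that setup looks like (command names are from the GNBD tools in the Red Hat cluster suite; the volume, export name and server name are made-up examples):

```shell
# On the storage server: start the GNBD server daemon and export a
# logical volume under a logical name ("xen1-root" is an example).
gnbd_serv
gnbd_export -d /dev/vg0/xen1-root -e xen1-root

# On the client (Dom0 or DomU): import everything the server exports.
# The device then shows up as /dev/gnbd/xen1-root.
gnbd_import -i storage1
```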

If we compare your iSCSI-based setup to a Heartbeat/DRBD/GNBD setup there might be some interesting points. You can choose for yourself whether you want the DomUs to act as GNBD clients, or whether you want to access the GNBD servers directly from your DomU - or a combination (through Dom0 for rootfs/swap, and via GNBD for data volumes).

- On domU, two iscsi block devices are combined into a raid1. On this raid1 we will have the rootfs.
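The domU side of that iSCSI variant might look roughly like this (the target IQNs, portal addresses and device names are invented; the iscsiadm syntax is from open-iscsi):

```shell
# Log in to one target on each of the two storage servers.
iscsiadm -m node -T iqn.2006-01.com.example:storage1.xen1 -p 10.0.1.1 --login
iscsiadm -m node -T iqn.2006-01.com.example:storage2.xen1 -p 10.0.2.1 --login

# Mirror the two imported disks; the rootfs then lives on /dev/md0,
# so either storage server can fail without taking the domU down.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
```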

Advantages:
- storage servers can easily be upgraded. Because of raid1 you can safely disconnect one storage server and upgrade its disk capacity. After the raid1 has resynced you can do the same with the other storage server.
The same with Heartbeat/DRBD/GNBD: you just fail one of the storage servers and upgrade it. After it is back up, DRBD does an _incremental_ sync which usually takes just a few seconds. With such a setup you can use a _dedicated_ link for DRBD.
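A DRBD resource with such a dedicated replication link could look something like this (hostnames, disks and the 10.0.0.x crossover addresses are placeholders, not a recommendation):

```
resource r0 {
  protocol C;                  # synchronous replication
  on storage1 {
    device    /dev/drbd0;
    disk      /dev/sda2;
    address   10.0.0.1:7788;   # dedicated crossover link
    meta-disk internal;
  }
  on storage2 {
    device    /dev/drbd0;
    disk      /dev/sda2;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```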
- If you use a kind of lvm on the storage servers you can easily expand the exported iscsi block devices (the raid1 and the filesystem also have to be expanded).
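The growth path described above, roughly sketched (volume names and sizes are examples; the md array has to be grown before the filesystem):

```shell
# On each storage server: grow the exported logical volume.
lvextend -L +10G /dev/vg0/xen1-root

# In the domU, once both legs of the mirror have grown:
# grow the raid1 to the new size, then the filesystem on top of it.
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0
```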
The same goes for Heartbeat/DRBD/GNBD, I would guess.

- You can do live migration without specially configuring the destination Xen host (e.g. providing block devices in dom0 to export to domU), because everything is done in domU.
GNBD clients are more or less stateless.
- If one domU dies, or the Xen host does, you can easily start the domUs on other Xen hosts.

Disadvantages:
- When one storage server dies, ALL domUs have to rebuild their raid1 when this storage server comes back. High traffic on the SANs.

You will also have to rebuild a volume if a XenU dies while writing to disk.
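One way to soften both rebuild cases is an md write-intent bitmap, which turns a full resync into a resync of only the regions that were dirty at the time of the failure (this assumes a reasonably recent kernel and mdadm with bitmap support):

```shell
# Add an internal write-intent bitmap to the existing mirror.
# After a crash or a temporary leg failure, only dirtied chunks
# are resynced instead of the whole device.
mdadm --grow /dev/md0 --bitmap=internal
```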

 - Not easy to set up a new domU in this environment (lvm, iscsi, raid1)

iSCSI for rootfs sounds like a lot of pain.
Not sure:
- Performance? Can we get full network performance in domU? Ideally we can use the full bandwidth of the SANs (e.g. 1GBit/s). And if the SANs can handle this (I will make a raid0 with three SATA disks in each storage server).
Remember that every write has to be written twice. So your write capacity might suffer a bit.

Per.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
