xen-users

Re: [Xen-users] Xen and iSCSI

To: Per Andreas Buer <per.buer@xxxxxxxxx>
Subject: Re: [Xen-users] Xen and iSCSI
From: Alvin Starr <alvin@xxxxxxxxxx>
Date: Mon, 30 Jan 2006 09:18:50 -0500
Cc: xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 30 Jan 2006 14:33:04 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <43DCE65B.80308@xxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <200601281724.47989.Markus@xxxxxxxxxxxxxxxxx> <43DCA940.50401@xxxxxxxxx> <200601291400.14736.Markus@xxxxxxxxxxxxxxxxx> <43DCE65B.80308@xxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 1.0.7-1.1.fc3 (X11/20050929)
Per Andreas Buer wrote:

Markus Hochholdinger wrote:


Well, my idea of HA is as follows:
- Two storage servers on individual SANs connected to the Xen hosts. Each storage server provides block devices via iSCSI.

I guess gnbd can be a drop-in replacement for iSCSI. I would think performance is better, as gnbd is written for the Linux kernel, while the SCSI protocol is written for hardware. I _know_ gnbd is easier to set up. You just point the client to the server and the client populates /dev/gnbd/ with the named entries (the devices are given logical names - no SCSI buses, devices or LUNs).
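
Roughly, the gnbd workflow described here looks like this - a sketch only, assuming the GNBD tools from the Red Hat cluster suite, with made-up host and volume names:

    # on the storage server "san1", with the GNBD server daemon running
    gnbd_export -d /dev/vg_san/vm01 -e vm01     # export the volume under the logical name "vm01"

    # on the client (dom0 or domU)
    modprobe gnbd
    gnbd_import -i san1                         # imports san1's exports, e.g. /dev/gnbd/vm01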

If I remember correctly, gnbd is not quite the same as iSCSI. When I looked into using gnbd I figured I could not create a target disk device that would present 10-20 unique devices to the Xen clients. I am using LVM to break a set of disks apart and then presenting each volume as a separate iSCSI target.

I did not think the same thing could be done with GNBD, but I started this about a year ago, so the rules may have changed in the intervening time.
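
For comparison, a sketch of that LVM-plus-iSCSI-target approach, assuming the iSCSI Enterprise Target (ietd) on the storage server; the volume group, volume and target names are made up:

    # carve one logical volume per domU out of the storage server's disks
    lvcreate -L 4G -n vm01 vg_san

    # /etc/ietd.conf - one target per volume
    Target iqn.2006-01.com.example.san1:vm01
        Lun 0 Path=/dev/vg_san/vm01,Type=fileio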


If we compare your iSCSI-based setup to a Heartbeat/DRBD/GNBD setup there might be some interesting points. You can choose for yourself whether you want dom0 to act as the GNBD client (handing block devices to the domU) or whether you want to access the GNBD servers directly from your domU - or a combination (through dom0 for rootfs/swap, and directly via GNBD for data volumes).
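
For reference, a DRBD resource for such a setup might look roughly like this (a sketch only; the exact options depend on the DRBD version, and the host names, backing volumes and dedicated-link addresses are made up):

    # /etc/drbd.conf - mirror one backing volume between the two storage servers
    resource vm01 {
        protocol C;
        on san1 {
            device    /dev/drbd0;
            disk      /dev/vg_san/vm01;
            address   10.0.1.1:7788;    # dedicated replication link
            meta-disk internal;
        }
        on san2 {
            device    /dev/drbd0;
            disk      /dev/vg_san/vm01;
            address   10.0.1.2:7788;
            meta-disk internal;
        }
    }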

- On the domU, two iSCSI block devices are combined into a RAID1. On this RAID1 we will have the rootfs.
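
Assembling that mirror inside the domU would look roughly like this, assuming the two iSCSI disks show up as /dev/sdb and /dev/sdc in the domU (device names are made up):

    # inside the domU, after logging in to both iSCSI targets
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    mkfs.ext3 /dev/md0      # the rootfs lives on the mirror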

Advantages:
- Storage servers can easily be upgraded. Because of the RAID1 you can safely disconnect one storage server and upgrade its hard disk space. After resyncing the RAID1 you can do the same with the other storage server.

The same with Heartbeat/DRBD/GNBD. You just fail one of the storage servers and upgrade it. After it is back up, DRBD does an _incremental_ sync which usually takes just a few seconds. With such a setup you can use a _dedicated_ link for DRBD.

That is a nice feature.
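
The per-domU side of that upgrade cycle is just the usual md dance - a sketch, with made-up device names:

    # before taking storage server "san1" down, drop its half of the mirror
    mdadm /dev/md0 --fail /dev/sdb
    mdadm /dev/md0 --remove /dev/sdb

    # once san1 is back and its iSCSI disk is visible again, re-add it
    mdadm /dev/md0 --add /dev/sdb       # triggers a resync of that half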

- If you use some kind of LVM on the storage servers you can easily expand the exported iSCSI block devices (the RAID1 and the filesystem also have to be expanded).

The same goes for Heartbeat/DRBD/GNBD, I would guess.
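
That expansion path would look roughly like this, assuming LVM on the storage servers and an md RAID1 plus ext3 inside the domU (volume names are made up; how the initiator notices the new device size depends on the iSCSI implementation):

    # on each storage server: grow the backing volume
    lvextend -L +4G /dev/vg_san/vm01

    # inside the domU, once the bigger devices are visible
    mdadm --grow /dev/md0 --size=max    # grow the mirror to the new component size
    resize2fs /dev/md0                  # then grow the filesystem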

- You can do live migration without configuring the destination Xen host specially (e.g. providing block devices in dom0 to export to the domU), because everything is done in the domU.

GNBD clients are more or less stateless.
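
A minimal sketch of such a self-contained domU config (file and bridge names are made up) - note there is no disk = [...] line for dom0 to provide; the initrd brings up the network, logs in to iSCSI and assembles the RAID1 itself:

    # /etc/xen/vm01
    kernel  = "/boot/vmlinuz-2.6-xenU"
    ramdisk = "/boot/initrd-iscsi-md.img"
    memory  = 256
    name    = "vm01"
    vif     = [ 'bridge=xenbr0' ]
    root    = "/dev/md0 ro"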

- If one domU dies, or the Xen host does, you can easily start the domUs on other Xen hosts.

Disadvantages:
- When one storage server dies, ALL domUs have to rebuild their RAID1 when this storage server comes back. High traffic on the SANs.


You will also have to rebuild a volume if a XenU dies while writing to disk.

- Not easy to set up a new domU in this environment (LVM, iSCSI, RAID1).


iSCSI for rootfs sounds like a lot of pain.

Not sure:
- Performance? Can we get full network performance in the domU? Ideally we can use the full bandwidth of the SANs (e.g. 1 GBit/s). And can the SANs handle this? (I will make a RAID0 with three SATA disks in each storage server.)

Remember that every write has to be written twice. So your write capacity might suffer a bit.
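
For completeness, the storage-server side described above (three SATA disks striped, then carved up with LVM) would look roughly like this, with made-up device and volume group names:

    # on each storage server
    mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
    pvcreate /dev/md0
    vgcreate vg_san /dev/md0    # the per-domU logical volumes are exported from this VG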

Has anybody built a system using gnbd that supports several dom0 systems and migrating domUs?

--
Alvin Starr                   ||   voice: (416)585-9971
Interlink Connectivity        ||   fax:   (416)585-9974
alvin@xxxxxxxxxx              ||


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
