
Re: [Xen-users] iSCSI target - run in Dom0 or DomU?

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] iSCSI target - run in Dom0 or DomU?
From: Thomas Harold <tgh@xxxxxxxxxxxx>
Date: Fri, 25 Aug 2006 11:44:45 -0400
Delivery-date: Fri, 25 Aug 2006 08:46:03 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <200608251443.17965.M.Wild@xxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <44ED9748.3090906@xxxxxxxxxxxx> <200608241249.23436.javier@xxxxxxxxxxx> <44EEF226.7010507@xxxxxxxxxxxx> <200608251443.17965.M.Wild@xxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.5 (Windows/20060719)
Matthew Wild wrote:
> What I've been building is pretty much the same as this. We have 2 storage servers with 5TB usable storage each, replicating through drbd. These then run iscsitarget to provide LVM-based iSCSI disks to a set of Xen servers using open-iscsi. The virtual machines are then set up using these physical disks. Because the iSCSI devices can have the same /dev/disk/by-id or /dev/disk/by-path labels on each Xen dom0, you can create generic config files that will work across all the servers. Also, even though drbd is a primary/secondary replication agent at the moment, everything is quite happy for multiple Xen dom0s to connect to the disks, allowing for very quick live migration.

> I haven't gone quite so far with multiple switches etc., but we are using VLANs to separate the dom0 traffic (eth0), domUs (eth1), and iSCSI (eth2). All on Gb networking. We are also thinking of putting 10Gb links between the storage servers to keep drbd happy.

Excellent news.  Did you document your setup anywhere public?
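
If I'm following the /dev/disk/by-path trick correctly, each dom0 logs into the target with open-iscsi and the domU config then points at the stable udev name, so the same config file works on every head unit. Something roughly like this (the IP, IQN and volume names below are just made-up examples):

  # on each Xen dom0
  iscsiadm -m discovery -t sendtargets -p 192.168.2.10
  iscsiadm -m node -T iqn.2006-08.com.example:san.vm01 -p 192.168.2.10 --login

  # in the domU config file
  disk = [ 'phy:/dev/disk/by-path/ip-192.168.2.10:3260-iscsi-iqn.2006-08.com.example:san.vm01-lun-0,xvda,w' ]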

This all started because we're getting ready to add 2 new servers to our motley mix of individual servers with DAS and I had a bit of a brainflash where I finally saw how Xen + SAN could play together. Eventually, we should be able to pack our 6-8 servers down into 2-4 Xen "head" units and a pair of redundant storage units. The firewall / border boxes will remain as separate units (although possibly running Xen + Shorewall).

For a modest amount of complexity, we'll gain an enormous amount of flexibility, which will hopefully mean a lot less stress for me. No more lying awake at night worrying about what happens when server hardware fails.

(And doing this myself forces me to learn the technology, which is worthwhile.)

...

My initial thought was also to use software RAID across the two SAN units: export a block device from each SAN unit via iSCSI, then have the DomU manage its own RAID1 array. But I'll take a closer look at DRBD, since that seems to be the preferred method.
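
For comparison, the software-RAID-in-the-DomU idea boils down to something like this inside the guest, assuming the two iSCSI exports show up as /dev/xvdb and /dev/xvdc (the device names are just an assumption):

  # inside the DomU: mirror the two SAN exports
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/xvdb /dev/xvdc
  mkfs.ext3 /dev/md0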

...

The proposed topology for us (full build-out) was:

(2) SAN units with 5 NICs
(3) gigabit switches
(4) Xen units with 3 NICs (maybe 5 NICs)

Switch A is for normal LAN traffic.

Switches B & C are for iSCSI traffic, with the two switches connected, possibly via 4 bonded/trunked ports for a 4-gigabit backbone, or using one of the expansion ports to link the two switches.

Each SAN unit will have a single link to the LAN switch for management (via ssh) and monitoring. The other 4 NICs would be bonded into two pairs and attached to switches B & C for fault-tolerance and for serving up the iSCSI volumes.
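
The bonding part should just be the standard Linux bonding driver. A sketch of what I have in mind for /etc/modprobe.conf (interface names and the mode are assumptions; we'd probably start with active-backup since the goal is fault-tolerance rather than throughput):

  alias bond0 bonding
  alias bond1 bonding
  options bonding mode=active-backup miimon=100 max_bonds=2
  # then enslave eth1/eth2 into bond0 and eth3/eth4 into bond1 with ifenslave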

Xen head units would have 3 or 5 NICs: one for connecting to the LAN switch to provide client services to users, and the others for connecting to SAN switches B & C for fault-tolerance (with the possibility of bonding for more performance if we go with 5 NICs).
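
On the head units, the LAN-facing NIC would be the one handed to Xen's default bridge; something like this in /etc/xen/xend-config.sxp (assuming eth0 is the LAN NIC):

  (network-script 'network-bridge netdev=eth0')
  (vif-script vif-bridge)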

One change that I'm going to make, since you mention DRBD wanting a faster inter-link, is to add 2 more NICs to the SAN units (for a total of 7 NICs). The additional 2 NICs could then be bonded together and connected directly to the other SAN unit via cross-over cables.
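
The DRBD side of that would presumably just point each resource at the cross-over addresses. A minimal sketch of /etc/drbd.conf, with the hostnames, backing devices and 10.x addresses all made up:

  resource r0 {
    protocol C;
    on san1 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   10.0.0.1:7788;
      meta-disk internal;
    }
    on san2 {
      device    /dev/drbd0;
      disk      /dev/sdb1;
      address   10.0.0.2:7788;
      meta-disk internal;
    }
  }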

But to start, I'll be running all iSCSI traffic over an existing gigabit switch. We're going to install a 48-port gigabit switch soon, which will free up the existing 24-port gigabit switch for use with the SAN. (Our LAN is still mostly 10/100 hubs with a gigabit core.) ETA for installing the 2nd switch and 2nd SAN unit is probably 6-12 months.

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users