This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-users] what do you recommend for cluster fs ??

To: Tom Mornini <tmornini@xxxxxxxxxxxxxx>
Subject: Re: [Xen-users] what do you recommend for cluster fs ??
From: Luke Crawford <lsc@xxxxxxxxx>
Date: Mon, 25 Sep 2006 13:46:25 -0700 (PDT)
Cc: Xen-users@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 25 Sep 2006 13:47:12 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <4E39CBF6-3F00-4037-8B1D-97211E651606@xxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <acb757c00609230744q767c1d72jeb70304fccb5db08@xxxxxxxxxxxxxx> <45155D1D.8070804@xxxxxxx> <4515CA94.6000109@xxxxxxxxx> <4515CE96.8020300@xxxxxxx> <4eb282840609241003p739779f8yec4a2cff5e96adf0@xxxxxxxxxxxxxx> <45176415.1070102@xxxxxxxxxxxxxx> <1159168155.25091.78.camel@xxxxxxxxxxxxxxxxxxxxx> <45179240.1090806@xxxxxxxxxxxxxx> <4eb282840609250809l59cf0509i93c05e7c9c8f4eef@xxxxxxxxxxxxxx> <80117592-FBB7-4BD9-9571-B055D5FBE2D5@xxxxxxxxxxxxxx> <20060925170056.GC16083@xxxxxxxxxxxxxxxx> <6188A849-4B2A-4574-9D63-3B8A8585C790@xxxxxxxxxxxxxx> <Pine.NEB.4.64.0609251102470.25321@xxxxxxxxxxxxxxxxxx> <4E39CBF6-3F00-4037-8B1D-97211E651606@xxxxxxxxxxxxxx>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx

Well, if ATA meets your needs, that's fine. The last time I used 7.5K SATA, I had long "pauses" whenever all the other machines on a particular disk/array ran their daily crontab, and at other times of even moderately high I/O. On fibre, everything is pretty smooth. Under load it can get slow (I'm running 2.0GHz Xeons with a 400MHz bus and PC2100 RAM), but the latencies are always low; the system is always responsive.

Also, all the external SATA chassis have been anything but commodity. The last time I looked (which, admittedly, was a while ago), 3ware and Adaptec had competing, incompatible "standards" for their 4-lane connectors, and as I recall, at the time the RAID cards cost almost as much as the disks in the 14-bay Supermicro we used.

Are there standard interconnects now, such that I can buy a disk chassis from one company and be fairly certain it will connect to a storage controller from another? (I really would like to know. I'd like to try out some of those 2.5" 10K RPM SAS drives; those look cool. But I refuse to do so until there is an open standard supported by more than one vendor.) I have seen fairly standard-looking SATA JBOD cases that used fibre channel interconnects; I will probably be buying some for storage shortly.

Personally, I place zero value on "support": even from premium vendors like EMC, and even when you escalate up to engineering, you don't get anyone who knows more than I do, and it takes days to get to that point.

However, good manufacturer warranties are really nice; I'll pay a significant premium for those. I always buy Corsair RAM for that reason: awesome warranty. And new disks are nearly always Seagate; Seagate has excellent warranty support. You don't even have to talk to a person; just fill out a web form (easy when you have a barcode scanner) and mail the drives in.

So yeah, if there are standard SAS/SATA interconnects, I'd like to know about them, as that seems to be where Seagate thinks things are going. (And I like the way SAS scales: one bus per disk, when you have as many spindles as I do, would equal some really nice throughput. That's the only thing the fibre disks lack. They are okay under heavy random load, where SATA chokes, but they are also only okay under sequential load, where SATA flies. Of course, I don't see much sequential activity (sequential activity from several hosts to the same disk equals random load on the disk), so I go with the fibre. Still, good sequential performance would make full system restores and a few other things run a whole lot faster. You don't need to do a full system restore very often, but when you do, you really, really need to do it.)
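The "several sequential streams equal random load" point is easy to see with back-of-the-envelope numbers. A rough sketch; the drive figures here (8ms average seek, 60MB/s sequential rate, 64KB requests) are assumed illustrative values, not measurements from any of the hardware above:

```python
# Why several "sequential" streams to one spindle behave like random
# I/O: once requests interleave, every request pays a seek.
# All drive parameters below are assumed illustrative figures.

SEQ_MBPS = 60.0      # assumed sustained sequential rate of the drive
AVG_SEEK_MS = 8.0    # assumed average seek + rotational latency
REQUEST_KB = 64      # assumed I/O request size

def effective_mbps(n_streams):
    """Throughput when n interleaved streams force a seek per request."""
    if n_streams <= 1:
        return SEQ_MBPS  # one stream: pure sequential, no extra seeks
    transfer_ms = REQUEST_KB / 1024.0 / SEQ_MBPS * 1000.0
    per_request_ms = AVG_SEEK_MS + transfer_ms
    return (REQUEST_KB / 1024.0) / (per_request_ms / 1000.0)

print(round(effective_mbps(1), 1))  # -> 60.0
print(round(effective_mbps(8), 1))  # -> 6.9
```

In this simple model the drive collapses from 60MB/s to under 7MB/s as soon as a second host starts competing for the head, which is the whole argument for optimizing random rather than sequential performance on shared spindles.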

On Sep 25, 2006, at 11:17 AM, Luke Crawford wrote:

If you are going to the trouble of a second network, wouldn't using 1Gbit fibre make more sense? 1Gbit fibre channel is actually cheaper than gigabit Ethernet, assuming you are buying name-brand equipment and comparing used to used, and for disk use, fibre channel is much faster than going over the network. It also has basic disk organization/pseudo-security built in already.

Not for me. The way I see things, with Coraid and AoE, I get to stay in uber-commodity land with SATA disks, GbE cards and switches. The current AoE drivers balance requests over multiple ports, so in a fully redundant configuration you get nearly 2Gbps throughput.
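For what it's worth, the Linux-side AoE setup is tiny. A minimal sketch using the aoetools package; the interface names eth1/eth2 are assumptions for the two storage-facing GbE ports, not anything from this thread:

```shell
# Load the AoE initiator driver and restrict it to the two
# storage-facing GbE ports (interface names are assumptions).
modprobe aoe
aoe-interfaces eth1 eth2

# Ask the driver to (re)discover AoE targets on those links.
aoe-discover

# List discovered targets; block devices appear as
# /dev/etherd/e<shelf>.<slot>.
aoe-stat
```

With both ports listed in aoe-interfaces, the driver can spread requests across them, which is where the roughly 2Gbps figure in a fully redundant setup comes from.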

I use PCI-X QLogic 2200 cards (around $10 each) and Brocade SilkWorm 2800 switches (around $100 each), along with whatever fibre arrays I can find. (I have an IBM EXP-500 right now. Nice! But it was $250. You can get used Dell/EMC 10-bay half-height arrays for little more than shipping, but that's because they are flimsy crap: after getting one shipped with drives in it, you will have bad/flaky slots. I just ordered a Sun StorEdge A5200 for around $150, but those are low-profile, and the half-height drives are extremely cheap: you can get 10K RPM half-height 73GB drives for around $10 each. That goes up to $50 or so for low-profile drives of the same spec, and you only get a 30% density improvement.)

I like the fact that the stuff I'm buying is reasonably inexpensive, and is brand new with manufacturer warranties.

Additionally, I like the fact that it's all headed in the right direction for 10GE soon, when those prices drop as well.

-- Tom Mornini

Xen-users mailing list
