Re: [Xen-users] what do you recommend for cluster fs ??
Well, if ATA meets your needs, that's fine. Last time I used 7.5K SATA, I
had long "pauses" whenever all the other computers on a particular
disk/array ran their daily crontab, and at other times of even moderately
high I/O. On fibre, everything is pretty smooth. Under load it can get
slow (I'm running on 2.0GHz Xeons with a 400MHz bus and PC2100 RAM), but
the latencies are always low; the system is always responsive.
Also, all the external SATA chassis have been anything but commodity.
Last time I looked (which admittedly was a while ago), both 3ware and
Adaptec had competing, incompatible "standards" for their 4-lane
connectors, and as I recall, at the time the RAID cards cost almost as
much as the disks in the 14-bay Supermicro we used.
Are there standard interconnects now, such that I can buy a disk chassis
from one company and be fairly certain it will connect to a storage
controller from another? (I really would like to know. I'd like to
try out some of those 2.5" 10K RPM SAS drives; those look cool. But I
refuse to do so until there is an open standard supported by more than
one vendor.) I have seen fairly standard-looking SATA JBOD cases that
used fibre channel interconnects; I will probably be buying some for
storage shortly.
Personally, I place zero value on "support," as even from the premium
vendors like EMC, and even when you escalate up to engineering, you don't
get anyone who knows more than I do, and it takes days to get to that
point.
However, good manufacturer warranties are really nice; I'll pay a
significant premium for those. I always buy Corsair RAM for that reason:
awesome warranty. And new disks are nearly always from Seagate; Seagate
has excellent warranty support. You don't even have to talk to a person;
just fill out a web form (easy when you have a barcode scanner) and mail
the drives in.
So yeah, if there are standard SAS/SATA interconnects, I'd like to know
about them, as that seems to be where Seagate thinks things are going.
(And I like the way SAS scales: one bus per disk, when you have as
many spindles as I do, would equal some really nice throughput. That's
the only thing the fibre disks lack. They are okay under heavy random
load, where SATA chokes, but they are also only okay under sequential load,
where SATA flies. Of course, I don't see much sequential activity
(sequential activity from several hosts to the same disk equals random
load on the disk), so I go with the fibre. Still, good sequential
performance would make full system restores and a few other things run a
whole lot faster. You don't need to do a full system restore very often,
but when you do, you really, really need to do it.)
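The "several sequential streams equal random load" point is easy to see with a toy model. This is just an illustration of the claim, not a real I/O trace: each host reads its own region of the shared disk strictly in order, but the merged request stream the disk actually sees jumps between regions on nearly every request.

```python
def merged_stream(num_hosts=4, reads_per_host=8, region_size=1000):
    """Round-robin interleave each host's sequential block reads.

    Each host reads blocks 0, 1, 2, ... within its own region of the
    disk; the disk sees the interleaved union of all those streams.
    """
    stream = []
    for i in range(reads_per_host):
        for host in range(num_hosts):
            stream.append(host * region_size + i)
    return stream

def seek_distances(stream):
    """Distance (in blocks) the head moves between consecutive requests."""
    return [abs(b - a) for a, b in zip(stream, stream[1:])]

stream = merged_stream()
seeks = seek_distances(stream)
# A lone sequential reader would seek 1 block between requests;
# here almost every request seeks a whole region away.
print(max(seeks), sum(1 for s in seeks if s > 1), len(seeks))
```

With the default numbers, every single inter-request seek is at least a full region, which is exactly why per-host sequential workloads behave like random I/O at the shared disk.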
On Sep 25, 2006, at 11:17 AM, Luke Crawford wrote:
If you are going through the trouble of a second network, wouldn't using
1Gbit fibre make more sense? 1Gbit fibre channel is actually cheaper than
gigabit Ethernet, assuming you are buying name-brand equipment and comparing
used to used, and for disk use, fibre channel is much faster than going
over the network. It also has basic disk organization/pseudo-security
built in already.
Not for me. The way I see things, with Coraid and AoE, I get to stay in
uber-commodity land with SATA disks, GbE cards and switches. The current AoE
drivers balance requests over multiple ports, so in a fully redundant
configuration you get nearly 2Gbps throughput.
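For reference, the kind of AoE setup described above can be sketched with the standard aoetools userland; the device and interface names below are placeholders, and the two-exporter approach for redundant paths is one common way to do it, not the only one:

```shell
# On the storage host: export /dev/sdb as AoE shelf 0, slot 0.
# Running one vblade per GbE port exposes the same target over
# both links; the client-side aoe driver balances across them.
vblade 0 0 eth0 /dev/sdb &
vblade 0 0 eth1 /dev/sdb &

# On the Xen host: load the driver and scan for targets.
modprobe aoe
aoe-discover

# The target then shows up as a local block device:
#   /dev/etherd/e0.0
# which can be partitioned, put under LVM, or handed to a domU.
```

Since AoE runs directly on Ethernet frames (no IP), both hosts must share a layer-2 segment, which is why plain GbE switches are all the fabric you need.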
I use PCI-X QLogic 2200 cards (around $10 each) and Brocade SilkWorm 2800
switches (around $100 each), along with whatever fibre arrays I can find.
(I have an IBM EXP500 right now. Nice! But it was $250. You can get used
Dell/EMC 10-bay half-height arrays for little more than shipping, but
that's because they are flimsy crap: after getting one shipped with drives
in it, you will have bad/flaky slots. I just ordered a Sun StorEdge
A5200 for around $150, but those are low-profile, and the half-height
drives are extremely cheap: you can get 10K RPM half-height 73GB drives for
around $10 each. That goes up to $50 or so for the low-profile drives of
the same spec, and you only get a 30% density improvement.)
I like the fact that the stuff I'm buying is reasonably inexpensive, and is
brand new with manufacturer warranties.
Additionally, I like the fact that it's all headed in the right direction for
10GbE soon, when those prices drop as well.
--
-- Tom Mornini
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users