On Sat, 30 Sep 2006, Tom Mornini wrote:
> On Sep 30, 2006, at 9:38 PM, Luke Crawford wrote:
>> If you are the boss, used 1G FC is quite a bit cheaper (and faster) than
>> used 1G Ethernet.
>
> Faster, probably (I'm certainly not arguing), but the big storage vendors
> have recently said that 4Gb fiber will be the top speed for years to come.
Yes, I think I mentioned that this equation may change when 10G Ethernet
becomes affordable. I would be surprised if FC were still the best choice 5
years from now; however, even if I go with 1G Ethernet now, I'll still
have to buy all new equipment when the 10G stuff comes out, so I might as
well get the most performance for my dollar now.
(Now, I predict that SAS, and not 10G Ethernet, will be the best solution
5 years from now; I would be using SAS now if I could buy affordable
components from different vendors and reasonably expect them to work
together, as I can with FC. Of course, this is just my prediction, and it
is worth exactly what you paid for it.)
> Cheaper? Are you talking about buying used FC disks as well? Because FC disks
> -vs- SATA disk is no comparison in terms of $/GB. It's my understanding that
> *most* FC solutions require FC disks...
You can get 12-bay SATA -> FC arrays for around $1K. If you know where to
get cheaper SATA -> gigabit Ethernet arrays, I'd like to know about it.
http://cgi.ebay.com/EMC-AX100-Fibre-Channel-SATA-Drive-Array_W0QQitemZ270035121613QQihZ017QQcategoryZ80219QQssPageNameZWDVWQQrdZ1QQcmdZViewItem?hash=item270035121613
I think 7200 RPM SATA drives are not up to snuff for virtual hosting (at
least not on my systems; these disks are quite shared and heavily used.
When I was using SATA, I had disk I/O latency issues with only 10
DNS/mail/internal infrastructure servers.) I imagine a write-back cache of
some sort (which most high-end redundant NAS units have), or simply
mounting all your disks async, would solve this problem, but it is rather
expensive to do that properly. Most SATA NAS units are just a single PC,
so if they enable write-back caching and the box panics, or your new admin
pulls the power plug, you have issues.
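To put a rough number on what that cache hides, here is a quick sketch
(not a proper benchmark; the file path is a placeholder, and a drive that
lies about cache flushes will still look fast):

#!/usr/bin/env python
# Rough sketch: compare buffered writes against writes fsync'd to stable
# storage, to see how much latency a write-back cache is papering over.
import os, time

TEST_FILE = "/tmp/wb-cache-test.dat"  # placeholder; point it at the fs you care about
BLOCK = b"x" * 4096                   # one 4 KiB block per write
COUNT = 1000

def timed_writes(sync_each):
    fd = os.open(TEST_FILE, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o600)
    start = time.time()
    for _ in range(COUNT):
        os.write(fd, BLOCK)
        if sync_each:
            os.fsync(fd)              # force the block out before continuing
    os.close(fd)
    return time.time() - start

buffered = timed_writes(sync_each=False)  # write-back style: fast, volatile
durable = timed_writes(sync_each=True)    # write-through style: slower, survives a crash
print("buffered: %.2fs   fsync per block: %.2fs" % (buffered, durable))
os.unlink(TEST_FILE)

The gap between those two numbers is exactly the risk a single-PC NAS is
taking when it turns write-back caching on without a battery-backed cache
behind it.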
But really, if you can get away with IDE disks, you can probably get away
with NFS over 100 Mbps, which is cheaper and easier than FC.
>> However, most bosses refuse to use used stuff; and some people think that
>> commodity Ethernet will scale faster than commodity FC, so it's better to
>> just run Ethernet everywhere. (these people may be right; my point still
>> stands that 1G fibre channel, bought used, gives you better storage
>> performance per dollar than 1G Ethernet)
>
> Performance, yes, but how about capacity? And just how much faster is it?
For me, capacity is a minor issue compared to latency under heavy
concurrent access. IOPS under that kind of load is where SCSI (and SCSI
over FC) disks really show their worth.
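If you want to put a number on that for your own hardware, something like
this rough sketch will do; it is single-threaded, so it really measures
per-request latency rather than queue-depth throughput, /dev/sdb is a
placeholder, and the target should be much larger than RAM so the page
cache doesn't flatter the result:

#!/usr/bin/env python
# Rough sketch: random 4 KiB reads for a fixed time window, reported as IOPS.
import os, random, time

TARGET = "/dev/sdb"   # placeholder: a raw device or a file much larger than RAM
BLOCK = 4096
SECONDS = 10

fd = os.open(TARGET, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)
reads = 0
start = time.time()
while time.time() - start < SECONDS:
    offset = random.randrange(0, size - BLOCK)
    offset -= offset % BLOCK          # keep reads block-aligned
    os.lseek(fd, offset, os.SEEK_SET)
    os.read(fd, BLOCK)
    reads += 1
os.close(fd)
print("%.0f random reads/sec" % (reads / (time.time() - start)))

A 7200 RPM SATA drive will land somewhere around 70-100 of those; a 15K
SCSI/FC drive roughly doubles that, and it degrades much more gracefully
when a dozen guests are hammering it at once.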
(And yes, I usually use used disks; I mirror them and run a SMART monitor
on them, so the reduced reliability isn't a huge deal. I would *not*
recommend using RAID5 with used disks. Well, I don't recommend RAID5 in
general, except as a substitute for a stripe that is less of a pain in the
ass to rebuild, simply because RAID5 performance drops precipitously
during a rebuild; your array is essentially down for a day if you are
running it near capacity.)
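The SMART monitor doesn't have to be anything fancy; smartd from the
smartmontools package is the proper tool, but the idea is just this kind
of rough sketch (the device names are placeholders for the mirror
members):

#!/usr/bin/env python
# Rough sketch: poll smartctl's overall health check on each mirror member
# and complain when a used disk starts to look sick, so it can be swapped
# out before the other half of the mirror goes too.
import subprocess

DISKS = ["/dev/sda", "/dev/sdb"]  # placeholders: the two halves of the mirror

for disk in DISKS:
    result = subprocess.run(["smartctl", "-H", disk],
                            capture_output=True, text=True)
    if result.returncode != 0 or "PASSED" not in result.stdout:
        print("WARNING: %s failed its SMART health check" % disk)
        print(result.stdout)

Run it out of cron and any warnings show up in the mail cron sends you.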
Like I said, if you are just going for capacity, use IDE over NFS on a
10/100 network. A gigabit network might be worth it if most of your access
is sequential (IDE comes pretty darn close to, and sometimes beats, SCSI
for sequential access), but in my environment there really is no such
thing as sequential access.
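For the sequential case the back-of-the-envelope is simple: 100 Mbps is
only about 12 MB/s on the wire and gigabit about 125 MB/s before protocol
overhead, so a single IDE disk can already saturate the former on a big
streaming read. Here is a companion sketch to the random-read one above,
against the same placeholder device:

#!/usr/bin/env python
# Rough sketch: time a large sequential read to see whether the disk or
# the network pipe is the ceiling.
import os, time

TARGET = "/dev/sdb"        # placeholder, same as the random-read sketch
CHUNK = 1024 * 1024        # 1 MiB per read
TOTAL = 512 * 1024 * 1024  # stop after 512 MiB

fd = os.open(TARGET, os.O_RDONLY)
done = 0
start = time.time()
while done < TOTAL:
    buf = os.read(fd, CHUNK)
    if not buf:
        break
    done += len(buf)
os.close(fd)
print("sequential: %.1f MB/s" % (done / (time.time() - start) / 1e6))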
My main point was that compared to a 1Gb Ethernet network dedicated to
storage, a 1Gb FC network is cheaper and faster; I believe this to be true
even if your end disks are SATA. (But like I said, I might be wrong on
that part; I'm not really familiar with pricing for network-attached IDE
arrays. I can't afford gigabit Ethernet equipment of a quality I'd like to
maintain, and I use 10K or 15K RPM SCSI/FC for everything that matters
anyhow.)
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users