RE: [Xen-users] AoE (Was: iscsi vs nfs for xen VMs)
Jeff Sturm wrote:
That said, the aoe protocol also supports an asynchronous write
operation, which I suppose really is "fire and forget", unlike normal
reads and writes. I haven't used an aoe driver that implements
asynchronous writes however, and I'm not sure I would if I had the
option since you have no guarantee that the writes succeed.
Agreed, I can't see many uses. However, there are some applications
(video capture, for instance) where it may be better to miss an
occasional block than to suffer the overhead of error correction.
I know someone who works for a surveying outfit where they drive
around with vans (a bit like the Google camera cars) recording video
of roads etc - the video is analysed later by the client to spot
things like broken street lights, potholes, or whatever it is they
are looking for. In a situation like this, it's better to have a
glitch in the video than potentially lose a big chunk because the
system pauses to correct an error. As it is, they use SDLT tape (I
think) because it's cheaper* than spinning disk and better suited to
the sustained streaming writes they produce.
* Presumably at the time the decision was made, I suspect that may
have changed now.
James Harper wrote:
I use DRBD locally and used to regularly see messages about concurrent
outstanding requests to the same sector. DRBD logs this because it can't
guarantee the serialisation of requests, so two write requests to the
same sector might be reordered by any layer that differs between the two
servers. It sounds like AoE would make this even worse: if the 'first'
write was lost, the 'second' write could be performed first, followed by
the retransmitted 'first' write.
Bear in mind that with modern disks it is normal for them to have
command queuing and reordering built in. So unless you specifically
turn it off, your carefully ordered writes may be re-ordered by the
drive itself.
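The usual defence against that reordering is a flush between dependent
writes, so the first is on stable storage before the second is issued.
A minimal, generic sketch in Python (nothing here is DRBD- or
AoE-specific; the function name and file layout are illustrative):

```python
import os

def ordered_writes(path, first, second, offset):
    """Write 'first' then 'second' to the same offset, guaranteeing
    that 'first' reaches stable storage before 'second' is issued."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o600)
    try:
        os.pwrite(fd, first, offset)
        os.fsync(fd)   # barrier: flush 'first' past the drive's queue
        os.pwrite(fd, second, offset)
        os.fsync(fd)   # ...so the drive cannot reorder it after 'second'
    finally:
        os.close(fd)
```

Without the intervening fsync (which Linux typically translates into a
cache-flush command for the drive), both the block layer and the drive's
own command queue remain free to reorder the two writes.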
Jeff Sturm wrote:
> I must admit, AoE does seem to have its upsides - in past threads
> (here and elsewhere) I've only ever seen it being criticised.
Many of those threads seem to delve into performance claims, which isn't
very helpful for comparing the protocols objectively. Frankly, I don't
care whether iSCSI or AoE is a few percent more efficient than the other
on the wire--if your storage implementation depends on such a small
margin to determine success or failure, think very carefully about your
tolerances. You'd better give yourself more headroom than that.
Although the reality is complex, the basic truth is that networks are
fast and (non-SSD) disks are slow. On sequential performance, a good
disk will have more bandwidth than a single GigE link, but under any
sort of random I/O the disk latency dominates all others and network
performance is marginalized. And you can forget about relying on the
performance of sequential I/O in any large application cluster with e.g.
tens of nodes and central storage.
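A back-of-envelope calculation makes the point. The figures below are
assumed typical values (a single GigE link, a good 7200 RPM disk, 4 KiB
random I/O), not measurements:

```python
# Assumed typical values, for illustration only
gige_bytes_per_s = 1e9 / 8          # GigE wire rate, ~125 MB/s
seq_disk_bytes_per_s = 150e6        # good disk, sequential, ~150 MB/s
avg_io_latency_s = 8e-3             # seek + rotational delay, ~8 ms
io_size = 4096                      # 4 KiB random I/O

random_iops = 1 / avg_io_latency_s              # ~125 IOPS
random_bytes_per_s = random_iops * io_size      # ~0.5 MB/s
```

Sequential throughput indeed exceeds the GigE link, but under random I/O
the disk delivers well under 1% of the network's bandwidth -- the wire is
nowhere near the bottleneck.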
The real benefit of AoE that seems to get lost on its detractors is its
simplicity. The protocol specification is brief and the drivers are
easy to install and manage. The protocol supports self-discovery (via
broadcast) so that once you connect your initiator to your targets and
bring your Ethernet interface up, device nodes just appear and you can
immediately use them exactly as you would local devices. Multipath over
AoE can be as easy as connecting two or more Ethernet interfaces rather
than one--the new transports will be discovered and utilized with zero
incremental configuration provided your targets and initiators support
it, as the commercial ones I use do.
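As a rough illustration of that discovery model: the Linux aoe driver
exposes each discovered target as a device node named after its shelf
and slot, e.g. /dev/etherd/e0.1. A small sketch, assuming that naming
convention (the helper names are mine), that lists and parses them:

```python
import glob
import re

# AoE device nodes appear as /dev/etherd/e<shelf>.<slot>
AOE_NAME = re.compile(r"^e(\d+)\.(\d+)$")

def parse_aoe(name):
    """Parse an AoE device name like 'e0.1' into (shelf, slot)."""
    m = AOE_NAME.match(name)
    if not m:
        raise ValueError("not an AoE device name: %r" % name)
    return int(m.group(1)), int(m.group(2))

def discovered_devices():
    """List whatever targets the aoe driver has already discovered."""
    return sorted(glob.glob("/dev/etherd/e*"))
```

There is no target address to configure: devices show up (or gain a
second path) as soon as the relevant interface comes up.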
The supposed benefits of iSCSI, which include security and routability,
are meaningless to me. Whether I use iSCSI or not, I would never let my
storage network touch any of our general networks. I want my storage
connected to my hosts over the shortest path possible, if not with
crossover cables, then with a dedicated switch. AoE is not inherently
more or less secure than a SAS cable, and shouldn't be, since you need
to physically secure your storage regardless of interconnects. For me
the security features of iSCSI only add to the complexity and overhead
inherent in the protocol.
Thanks, that's a useful insight.
--
Simon Hobson
Visit http://www.magpiesnestpublishing.co.uk/ for books by acclaimed
author Gladys Hobson. Novels - poetry - short stories - ideal as
Christmas stocking fillers. Some available as e-books.
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users