Re: [Xen-users] iscsi vs nfs for xen VMs

Hi!

> > iSCSI typically has quite a big overhead due to the protocol; FC, SAS,
> > native InfiniBand, and AoE have very low overhead.
> > 
> 
> For iSCSI vs AoE, that isn't as true as you might think. TCP offload can
> take care of a lot of the overhead. Any server-class network adapter
> these days should allow you to send 60 KB packets to the network adapter,
> and it will take care of the segmentation, while AoE is limited to
> MTU-sized packets. With AoE you need to checksum every packet yourself,
> while with iSCSI it is taken care of by the network adapter.
What AoE actually does is send one frame per block. The block size is 4K, so
there is no need for fragmentation. The overhead is pretty low, because we are
talking about plain Ethernet frames.
Most iSCSI issues I have seen involve packet reordering due to
transmission across several interfaces, so most people recommend
keeping the number of interfaces down to two. To keep performance up, this
means you have to use 10G, FC, or similar, which is quite expensive --
especially if you'd like an HA SAN network (HSRP and the like are required).

AoE does not suffer from those issues: using six GBit interfaces is no
problem at all, and load balancing happens automatically, as the load is
distributed equally across all available interfaces. HA is very simple:
just use two switches and connect one half of the interfaces to one switch
and the other half to the other switch. (It is recommended to use switches
that support jumbo frames and flow control.)
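
If you want a quick sanity check that jumbo frames are actually active on
all the interfaces before trusting such a setup, something like the
following Python sketch would do (the interface names are made-up examples;
on Linux the MTU is exposed under /sys/class/net):

    # Verify that every interface intended for AoE traffic has a jumbo MTU.
    # Interface names below are hypothetical; adjust them to your setup.
    from pathlib import Path

    AOE_INTERFACES = ["eth2", "eth3", "eth4", "eth5"]  # example names
    JUMBO_MTU = 9000

    for ifname in AOE_INTERFACES:
        mtu_file = Path("/sys/class/net") / ifname / "mtu"
        try:
            mtu = int(mtu_file.read_text())
        except FileNotFoundError:
            print(f"{ifname}: interface not found")
            continue
        status = "ok" if mtu >= JUMBO_MTU else "too small for jumbo frames"
        print(f"{ifname}: MTU {mtu} ({status})")
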
IMHO most of the current recommendations and practices surrounding iSCSI
exist to work around the shortcomings of the protocol. AoE is far more
robust and easier to handle.

-- Adi

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users