
Re: [Xen-users] AoE vs iSCSI



Hello,

On 18.03.2010 at 14:32, Jeff Sturm <jeff.sturm@xxxxxxxxxx> wrote:
> > -----Original Message-----
> > From: xen-users-bounces@xxxxxxxxxxxxxxxxxxx [mailto:xen-users-
> > bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Markus Hochholdinger
> > Sent: Wednesday, March 17, 2010 3:37 PM
> > To: xen-users@xxxxxxxxxxxxxxxxxxx
> > Cc: Chris 'Xenon' Hanson
> > Subject: Re: [Xen-users] AoE vs iSCSI
> > I tested AoE and iSCSI. AoE scales very badly! If you have more than 10
> > AoE devices over one NIC, you get bad throughput on a single AoE device
> > and a high load on the system. iSCSI with lots of LUNs (tested with over
> > 200) performs very well. I was able to get a little more than 100MByte/s
> > over one 1GBit/s NIC with iSCSI!
> All other things being equal, one protocol should not outperform the
> other by such a wide margin.  Your results obviously will depend on the
> quality of the implementation--i.e. whether you've chosen one of the
> open source AoE targets, or you are using a storage appliance with AoE,
> which OS/driver version, etc.
> AoE performance is also highly dependent on your network.  Always use
> jumbo frames and hardware flow control.  If you have a switch that
> doesn't handle these, get a new switch.

I made these tests in August 2008. I tested gnbd, AoE (vblade-18 and 
aoe6-63.tar.gz) and iSCSI, all on the same hardware with the same 
(dom0) kernel. For gnbd and iSCSI I didn't optimize anything. For AoE, because 
of the bad performance, I optimized the network settings: I had a direct 
connection between the two servers, so there was no switch configuration to 
worry about. I enabled jumbo frames on the NICs and did a few other things I 
don't remember now.
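
For anyone who wants to try a similar setup, here is a rough sketch of the 
target side in python (interface names, shelf/slot numbers and device paths 
are only examples; it assumes vblade and the "ip" command are installed):

  #!/usr/bin/env python
  # Rough sketch of the AoE target side setup (example values only).
  import subprocess

  STORAGE_NIC = "eth1"                 # dedicated storage NIC (example)
  EXPORTS = [                          # (shelf, slot, block device) - examples
      (0, 1, "/dev/vg0/test1"),
      (0, 2, "/dev/vg0/test2"),
  ]

  # enable jumbo frames on the storage NIC
  subprocess.check_call(["ip", "link", "set", STORAGE_NIC, "mtu", "9000"])

  # export each block device as an AoE target (one vblade process per export)
  for shelf, slot, dev in EXPORTS:
      subprocess.Popen(["vblade", str(shelf), str(slot), STORAGE_NIC, dev])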

The really bad thing was that the AoE client had very bad performance as soon 
as more than one or two vblades were connected, even though only one of them 
was actually used!

Example: One server with vblade exported a single block device over a 1GBit/s 
NIC. On the other server, the client, I got ~100MByte/s as expected. If I 
configured 10 vblades on the server, connected all 10 to the client and then 
tested a single etherd device, I got only ~20MByte/s throughput. With 100 
vblades I got only ~1MByte/s throughput!
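
A simple sequential read is already enough to see the effect. A minimal 
sketch of such a measurement in python (the device name is only an example; 
on the client the AoE devices normally show up as /dev/etherd/eX.Y):

  #!/usr/bin/env python
  # Minimal sequential read benchmark (sketch, example device name).
  # Note: repeated runs hit the page cache, so drop caches in between.
  import os, time

  DEVICE = "/dev/etherd/e0.1"       # example AoE device on the client
  BLOCK = 1024 * 1024               # read in 1 MByte chunks
  TOTAL = 1024 * 1024 * 1024        # read 1 GByte in total

  fd = os.open(DEVICE, os.O_RDONLY)
  start = time.time()
  done = 0
  while done < TOTAL:
      buf = os.read(fd, BLOCK)
      if not buf:                   # end of device reached
          break
      done += len(buf)
  os.close(fd)

  elapsed = time.time() - start
  print("%.1f MByte/s" % (done / elapsed / (1024 * 1024)))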

At that time I chose to use iSCSI going forward; before that I had used gnbd. 
I never tested AoE again, so perhaps the situation is better now. But my 
advice to everyone considering AoE is: test the performance with more than 
one connected AoE block device, if that is what you will need in production.
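
For such a test it is enough to make all exported devices visible on the 
client and then run the read test from above against a single one of them. A 
small sketch (again just an example, assuming the aoe module and aoetools are 
installed):

  #!/usr/bin/env python
  # Sketch: connect all exported vblades on the client, then benchmark
  # only one of them with the sequential read test above.
  import subprocess

  subprocess.check_call(["modprobe", "aoe"])    # load the AoE initiator
  subprocess.check_call(["aoe-discover"])       # discover all exported vblades
  subprocess.check_call(["aoe-stat"])           # list the /dev/etherd/* devices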


-- 
greetings

eMHa


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users

 

