xen-users

Re: [Xen-users] iscsi vs nfs for xen VMs

To: Pasi Kärkkäinen <pasik@xxxxxx>
Subject: Re: [Xen-users] iscsi vs nfs for xen VMs
From: Bart Coninckx <bart.coninckx@xxxxxxxxxx>
Date: Sat, 29 Jan 2011 18:12:34 +0100
Cc: James Harper <james.harper@xxxxxxxxxxxxxxxx>, Adi Kriegisch <adi@xxxxxxxxxxxxxxx>, Christian Zoffoli <czoffoli@xxxxxxxxxxx>, Roberto Bifulco <roberto.bifulco2@xxxxxxxx>, xen-users@xxxxxxxxxxxxxxxxxxx
On 01/29/11 16:30, Pasi Kärkkäinen wrote:
> On Sat, Jan 29, 2011 at 04:27:52PM +0100, Bart Coninckx wrote:
>   
>> On 01/29/11 16:09, Pasi Kärkkäinen wrote:
>>     
>>> On Thu, Jan 27, 2011 at 09:35:38AM +0100, Adi Kriegisch wrote:
>>>> Hi!
>>>>
>>>>>> iSCSI typically has quite a big overhead due to the protocol; FC, SAS,
>>>>>> native InfiniBand, and AoE have very low overhead.
>>>>>>
>>>>> For iSCSI vs AoE, that isn't as true as you might think. TCP offload can
>>>>> take care of a lot of the overhead. Any server-class network adapter
>>>>> these days should let you hand 60KB packets to the network adapter,
>>>>> and it will take care of the segmentation, while AoE is limited to
>>>>> MTU-sized packets. With AoE you need to checksum every packet yourself,
>>>>> while with iSCSI it is taken care of by the network adapter.
>>>>>
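(For reference: whether these offloads are actually enabled can be checked with ethtool; a minimal sketch, assuming the initiator NIC is eth0:)

# Show TCP segmentation offload and checksum offload state on eth0:
ethtool -k eth0 | grep -E 'tcp-segmentation-offload|checksum'

# Enable them if the hardware supports it but they are switched off:
ethtool -K eth0 tso on tx on rx on
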
>>>> What AoE actually does is send one frame per block. The block size is 4K, so
>>>> there is no need for fragmentation. The overhead is pretty low, because we're
>>>> talking about plain Ethernet frames.
>>>> Most iSCSI issues I have seen involve reordering of packets due to
>>>> transmission across several interfaces, so most people recommend keeping
>>>> the number of interfaces to two. To keep performance up, this means you
>>>> have to use 10G, FC or similar, which is quite expensive -- especially if
>>>> you'd like an HA SAN network (HSRP and the like are required).
>>>>
>>>> AoE does not suffer from those issues: using six GBit interfaces is no
>>>> problem at all, and load balancing happens automatically, as the load is
>>>> distributed equally across all available interfaces. HA is very simple:
>>>> just use two switches and connect one half of the interfaces to one switch
>>>> and the other half to the other. (It is recommended to use switches
>>>> that can do jumbo frames and flow control.)
>>>> IMHO most of the current recommendations and practices surrounding iSCSI
>>>> are there to overcome the shortcomings of the protocol. AoE is way more
>>>> robust and easier to handle.
>>>>
>>>>
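(For reference: the jumbo-frame recommendation is a per-interface MTU setting; a minimal sketch, with eth2/eth3 as placeholder AoE-facing interfaces:)

# Raise the MTU to 9000 on the storage-facing interfaces:
ip link set dev eth2 mtu 9000
ip link set dev eth3 mtu 9000

# Verify the new MTU took effect:
ip link show dev eth2
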
>>> iSCSI does not have problems using multiple GigE interfaces.
>>> Just set up multipathing properly.
>>>
>>> -- Pasi
>>>
>>>
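(For reference: "proper multipathing" here usually means one iSCSI session per NIC, bound via iface records, with dm-multipath aggregating the paths; a minimal sketch, where the portal address and interface names are assumptions:)

# Bind one open-iscsi iface record to each physical NIC:
iscsiadm -m iface -I iface0 --op=new
iscsiadm -m iface -I iface0 --op=update -n iface.net_ifacename -v eth0
iscsiadm -m iface -I iface1 --op=new
iscsiadm -m iface -I iface1 --op=update -n iface.net_ifacename -v eth1

# Discover the target through both interfaces and log in,
# creating one session (and thus one path) per NIC:
iscsiadm -m discovery -t st -p 192.168.1.10 -I iface0 -I iface1
iscsiadm -m node -L all

# dm-multipath should now show one map with two active paths:
multipath -ll
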
>> On this subject: I am using multipathing to iSCSI too, hoping to get
>> aggregated speed on top of path redundancy, but the speed does not seem
>> to surpass that of a single interface.
>>
>> Is anyone successful at doing this?
>>
> You're benchmarking sequential/linear IO, using big block sizes, right?
>
> Some questions:
>       - Are you using the multipath round-robin path policy?
>       - After how many IOs do you switch paths? You might need to lower
> rr_min_io.
>
> -- Pasi
Hi Pasi,

The benchmarking was done intuitively, with just dd and bonnie++.
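(Something along these lines; the multipath device name mpatha is a placeholder, and the write test is destructive to whatever is on the device:)

# Sequential write, O_DIRECT to bypass the page cache:
dd if=/dev/zero of=/dev/mapper/mpatha bs=1M count=4096 oflag=direct

# Sequential read:
dd if=/dev/mapper/mpatha of=/dev/null bs=1M count=4096 iflag=direct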

It is indeed round-robin; this is part of my multipath.conf:

defaults {
        udev_dir                /dev
        polling_interval        10
        selector                "round-robin 0"
        path_grouping_policy    multibus
        getuid_callout          "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        prio                    const
        path_checker            directio
        rr_min_io               100
        max_fds                 8192
        rr_weight               priorities
        failback                immediate
        no_path_retry           5
        user_friendly_names     no
}

Should the "100" for rr_min_io go down a bit?
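
(Lowering it would amount to a change like the following; 16 is only an example value to benchmark with, not a recommendation:)

defaults {
        ...
        rr_min_io               16      # switch paths after 16 IOs instead of 100
}

# Reload the multipath maps so the new value takes effect:
multipath -r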

thx,

bart


_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users