RE: [Xen-devel][Pv-ops][PATCH 0/3] Resend: Netback multiple thread support

To: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: RE: [Xen-devel][Pv-ops][PATCH 0/3] Resend: Netback multiple thread support
From: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>
Date: Tue, 27 Apr 2010 11:02:57 +0800
Accept-language: en-US
Cc: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Delivery-date: Mon, 26 Apr 2010 20:04:50 -0700
In-reply-to: <D5AB6E638E5A3E4B8F4406B113A5A19A1D8CC03F@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <EADF0A36011179459010BDF5142A457501D006B913@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4FA716B1526C7C4DB0375C6DADBC4EA342A7A7E951@xxxxxxxxxxxxxxxxxxxxxxxxx> <EADF0A36011179459010BDF5142A457501D006BBAC@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4FA716B1526C7C4DB0375C6DADBC4EA342A7A7E95E@xxxxxxxxxxxxxxxxxxxxxxxxx> <EADF0A36011179459010BDF5142A457501D11C1BE3@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4B182D87.6030901@xxxxxxxx> <EADF0A36011179459010BDF5142A457501D11C20F8@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4B187513.80003@xxxxxxxx> <EADF0A36011179459010BDF5142A457501D13FDE62@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4B200727.8040000@xxxxxxxx> <EADF0A36011179459010BDF5142A457501D13FE3BB@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4B213766.7030201@xxxxxxxx> <D5AB6E638E5A3E4B8F4406B113A5A19A1D8CC03F@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Xu, Dongxiao wrote:
> Hi Jeremy and all,
> 
> I'd like to give an update on these patches. The main logic is
> unchanged; I only rebased them against the upstream pv-ops kernel.
> See the attached patch. The original patches are checked into
> Jeremy's netback-tasklet branch.
> 
> Let me explain the main idea of the patchset again:
> 
> The current netback uses a single pair of tasklets for Tx/Rx data
> transfer. A netback tasklet can only run on one CPU at a time, yet it
> has to serve all the netfronts, so it has become a performance
> bottleneck. This patchset replaces the single pair with multiple
> tasklet pairs in dom0.
> 
> Assuming that Dom0 has CPUNR VCPUs, we define CPUNR tasklet pairs
> (CPUNR for Tx and CPUNR for Rx). Each pair of tasklets serves a
> specific group of netfronts. The formerly global and static variables
> are also duplicated for each group in order to avoid a shared
> spinlock.
> 
> PATCH 01: Generalize static/global variables into 'struct xen_netbk'.
> 
> PATCH 02: Multiple tasklets support.
> 
> PATCH 03: Use a kernel thread to replace the tasklet.
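
(To make the structure easier to picture, here is a rough, illustrative
sketch of the per-group state the three patches describe. The field
names and layout below are simplified for this mail and do not match
the real patches line by line.)

#include <linux/interrupt.h>   /* struct tasklet_struct */
#include <linux/sched.h>       /* struct task_struct */
#include <linux/skbuff.h>      /* struct sk_buff_head */
#include <linux/wait.h>        /* wait_queue_head_t */

/* Illustrative only: one instance per group of netfronts, so each
 * group owns its own Tx/Rx work and its own copies of the formerly
 * static/global state, with no spinlock shared between groups. */
struct xen_netbk {
        /* PATCH 02: one Tx/Rx tasklet pair per group instead of a
         * single global pair. */
        struct tasklet_struct tx_tasklet;
        struct tasklet_struct rx_tasklet;

        /* PATCH 03: alternatively, a dedicated kernel thread per
         * group is woken instead of scheduling the tasklets. */
        struct task_struct *kthread;
        wait_queue_head_t wq;

        /* PATCH 01: formerly static/global variables, duplicated per
         * group (packet queues, pending ring bookkeeping, ...). */
        struct sk_buff_head rx_queue;
        struct sk_buff_head tx_queue;
};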
> 
> Recently I re-tested the patchset with an Intel 10G multi-queue NIC,
> using 10 external 1G NICs to run netperf tests against that 10G
> NIC.

Here is more detail about the test setup:

On the host side, we launch 10 HVM guests, each with the PV VNIF
driver installed. All the vif interfaces are attached to the 10G NIC
through a bridge, so the 10 guests share the 10G bandwidth.

Outside the host, we use 10 1G NIC interfaces to run netperf tests
against the 10 HVM guests.

Thanks,
Dongxiao

> 
> Case 1: Dom0 has more than 10 VCPUs, each pinned to a physical CPU.
> With the patchset, the throughput is 2x the original.
> 
> Case 2: Dom0 has 4 VCPUs pinned to 4 physical CPUs.
> With the patchset, the throughput is 3.7x the original.
> 
> When we tested this patchset, we found that the domain_lock taken in
> the grant table copy operation (gnttab_copy()) becomes a bottleneck.
> We temporarily removed the global domain_lock to achieve good
> performance.
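
(To show why that lock hurts, here is a generic, simplified sketch in
Linux style, not the actual Xen code: with one per-domain lock around
the grant copy, every group's Tx/Rx path serializes on it, so the
extra tasklets mostly end up waiting.)

#include <linux/spinlock.h>

/* Generic illustration, not Xen source: a single coarse per-domain
 * lock in the copy path serializes all netback groups, no matter how
 * many tasklets or kernel threads are running. */
static DEFINE_SPINLOCK(domain_lock);    /* one lock for the whole domain */

static void grant_copy_for_group(void)
{
        spin_lock(&domain_lock);        /* every group contends here */
        /* ... do the grant copy work for this group's packets ... */
        spin_unlock(&domain_lock);
}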
> 
> Thanks,
> Dongxiao
> 
> Jeremy Fitzhardinge wrote:
>> On 12/09/09 19:29, Xu, Dongxiao wrote:
>>>> Also, is it worth making it a tunable?  Presumably it needn't scale
>>>> exactly with the number of dom0 cpus; if you only have one or two
>>>> gbit interfaces, then you could saturate that pretty quickly with a
>>>> small number of cpus, regardless of how many domains you have.
>>>> 
>>> How many CPUs serve a NIC interface is determined by how its
>>> interrupts are delivered. If the system only has two gbit
>>> interfaces, and they deliver interrupts to CPU0 and CPU1, then two
>>> CPUs handle two tasklets and the other CPUs stay idle. group_nr
>>> only defines the maximum number of tasklets; it does not decide
>>> which CPU handles each tasklet.
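
(A rough sketch of what I mean, relying on the standard Linux
behaviour that tasklet_schedule() queues the tasklet on the CPU it is
called from; the handler name below is illustrative, not quoted from
the patches.)

#include <linux/interrupt.h>    /* irqreturn_t, tasklet_schedule() */

/* Illustrative only: the vif interrupt handler schedules its group's
 * tasklet, and tasklet_schedule() queues it on the CPU that took the
 * interrupt, so IRQ affinity decides which CPUs end up doing netback
 * work; group_nr is only an upper bound on how many tasklets exist. */
static irqreturn_t example_netif_be_int(int irq, void *dev_id)
{
        struct tasklet_struct *group_rx_tasklet = dev_id;

        tasklet_schedule(group_rx_tasklet);     /* runs on this CPU */
        return IRQ_HANDLED;
}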
>>> 
>> 
>> So does this mean that a given vcpu will be used to handle the
>> interrupt if it happens to be running on a pcpu with affinity for
>> the device?  Or that particular devices will be handled by
>> particular vcpus?
>> 
>>>> I've pushed this out in its own branch:
>>>> xen/dom0/backend/netback-tasklet; please post any future patches
>>>> against this branch. 
>>>> 
>>> What is my next step for getting the netback-tasklet tree merged
>>> into xen/master?
>>> 
>> 
>> Hm, well, I guess:
>> 
>>     * I'd like to see some comments from Keir/Ian(s)/others that
>>       this is basically the right approach.  It looks OK to me, but
>>       I don't have much experience with performance in the field.
>>           o does nc2 make nc1 obsolete?
>>     * Testing to make sure it really works.  Netback is clearly
>>       critical functionality, so I'd like to be sure we're not
>>       introducing big regressions.
>> 
>>      J
> 
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel