xen-devel

Re: [Xen-devel][Pv-ops][PATCH 0/4 v4] Netback multiple threads support

To: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>
Subject: Re: [Xen-devel][Pv-ops][PATCH 0/4 v4] Netback multiple threads support
From: Steven Smith <steven.smith@xxxxxxxxxx>
Date: Fri, 21 May 2010 18:34:16 +0100
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Dulloor <dulloor@xxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>, Steven Smith <Steven.Smith@xxxxxxxxxxxxx>
Delivery-date: Fri, 21 May 2010 10:35:15 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <D5AB6E638E5A3E4B8F4406B113A5A19A1E6E8E19@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <D5AB6E638E5A3E4B8F4406B113A5A19A1E6E8E19@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> Hi Steven and Jan,
> 
> I modified the code according to your comments, and the latest
> version is version 4.  Do you have further comments or consideration
> on this version?
No, that all looks fine to me.

Sorry about the delay in replying; I thought I'd already responded,
but I seem to have dropped it on the floor somewhere.

Steven.


> Xu, Dongxiao wrote:
> > Hi,
> > 
> > Do you have comments on this version of patch?
> > 
> > Thanks,
> > Dongxiao
> > 
> > Xu, Dongxiao wrote:
> >> This is the netback multithread support patchset, version 4.
> >> 
> >> Main Changes from v3:
> >> 1. Patchset is against xen/next tree.
> >> 2. Merge group and idx into netif->mapping (see the sketch after this list).
> >> 3. Use vmalloc to allocate netbk structures.
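For reference, the group/idx merge mentioned in item 2 boils down to packing two small integers into a single word so that the separate page_ext struct is no longer needed. The helpers below only illustrate that idea; the names, masks and bit widths are invented for the example and are not the patch's actual code.

/* Two small integers packed into one word; shift/mask values and helper
 * names are made up for this example. */
#define NETBK_GROUP_SHIFT       16
#define NETBK_IDX_MASK          ((1UL << NETBK_GROUP_SHIFT) - 1)

static inline unsigned long netbk_pack_mapping(unsigned int group,
                                               unsigned int idx)
{
        return ((unsigned long)group << NETBK_GROUP_SHIFT) | idx;
}

static inline unsigned int netbk_mapping_to_group(unsigned long mapping)
{
        return mapping >> NETBK_GROUP_SHIFT;
}

static inline unsigned int netbk_mapping_to_idx(unsigned long mapping)
{
        return mapping & NETBK_IDX_MASK;
}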
> >> 
> >> Main Changes from v2:
> >> 1. Merge "group" and "idx" into "netif->mapping", so page_ext is
> >> no longer needed.
> >> 2. Put netbk_add_netif() and netbk_remove_netif() into
> >> __netif_up() and __netif_down().
> >> 3. Change the usage of kthread_should_stop().
> >> 4. Use __get_free_pages() to replace kzalloc().
> >> 5. Modify the changes to netif_be_dbg().
> >> 6. Use MODPARM_netback_kthread to determine whether to use tasklets
> >> or a kernel thread (a rough sketch follows this list).
> >> 7. Put small fields at the front, and large arrays at the end, of
> >> struct xen_netbk.
> >> 8. Add more checks in netif_page_release().
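For reference, items 3 and 6 together boil down to something like the sketch below: each group gets either a tasklet pair or a kernel thread, chosen by the module parameter, and the thread's main loop re-checks kthread_should_stop() after each wakeup. The helper names (netbk_kthread, netbk_init_group, tx_work_todo/rx_work_todo) and the struct xen_netbk fields used here are only illustrative, not the patch's actual code.

#include <linux/module.h>
#include <linux/kthread.h>
#include <linux/interrupt.h>
#include <linux/sched.h>
#include <linux/wait.h>

static int MODPARM_netback_kthread;
module_param_named(netback_kthread, MODPARM_netback_kthread, int, 0);
MODULE_PARM_DESC(netback_kthread, "use a kernel thread instead of tasklets");

/* Existing Tx/Rx paths, now taking the per-group state as their argument. */
static void net_tx_action(unsigned long data);
static void net_rx_action(unsigned long data);
static int tx_work_todo(struct xen_netbk *netbk);
static int rx_work_todo(struct xen_netbk *netbk);

/* Per-group worker thread: sleep until there is Tx/Rx work or we are asked
 * to stop, then run the same actions the tasklets would have run. */
static int netbk_kthread(void *data)
{
        struct xen_netbk *netbk = data;

        while (!kthread_should_stop()) {
                wait_event_interruptible(netbk->action_wq,
                                         tx_work_todo(netbk) ||
                                         rx_work_todo(netbk) ||
                                         kthread_should_stop());
                cond_resched();

                if (kthread_should_stop())
                        break;

                if (tx_work_todo(netbk))
                        net_tx_action((unsigned long)netbk);
                if (rx_work_todo(netbk))
                        net_rx_action((unsigned long)netbk);
        }
        return 0;
}

/* At init time each group gets either a kernel thread or a tasklet pair,
 * depending on the module parameter. */
static void netbk_init_group(struct xen_netbk *netbk, unsigned int group)
{
        if (MODPARM_netback_kthread) {
                init_waitqueue_head(&netbk->action_wq);
                netbk->task = kthread_run(netbk_kthread, netbk,
                                          "netback/%u", group);
        } else {
                tasklet_init(&netbk->tx_tasklet, net_tx_action,
                             (unsigned long)netbk);
                tasklet_init(&netbk->rx_tasklet, net_rx_action,
                             (unsigned long)netbk);
        }
}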
> >> 
> >> The current netback uses one pair of tasklets for Tx/Rx data
> >> transactions. A netback tasklet can only run on one CPU at a time,
> >> yet it serves all the netfronts, so it has become a performance
> >> bottleneck. This patchset replaces the current single pair in dom0
> >> with multiple tasklet pairs.
> >> 
> >> Assuming that dom0 has CPUNR VCPUs, we define CPUNR tasklet pairs
> >> (CPUNR for Tx and CPUNR for Rx). Each pair of tasklets serves a
> >> specific group of netfronts. The formerly global and static variables
> >> are also duplicated per group in order to avoid spinlock contention.
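Concretely, the per-group state is roughly the sketch below: one xen_netbk instance per dom0 VCPU, carrying its own tasklet pair (or kthread) plus its own copies of what used to be file-scope variables, allocated as a vmalloc'ed array at init time. The field names here are illustrative only; the real struct in PATCH 01 carries the full set of queues, pending rings, grant-op arrays and timers.

#include <linux/interrupt.h>
#include <linux/sched.h>
#include <linux/skbuff.h>
#include <linux/vmalloc.h>
#include <linux/wait.h>

/* One instance per dom0 VCPU, i.e. per group of netfronts.  Everything that
 * used to be a file-scope variable in netback.c moves in here; only a few
 * representative fields are shown.  Per change 7 above, small fields go at
 * the front and the large arrays at the end. */
struct xen_netbk {
        struct tasklet_struct tx_tasklet;       /* tasklet mode */
        struct tasklet_struct rx_tasklet;
        struct task_struct *task;               /* kthread mode */
        wait_queue_head_t action_wq;

        struct sk_buff_head rx_queue;           /* was the global rx_queue */
        struct sk_buff_head tx_queue;           /* was the global tx_queue */

        /* ... pending-ring indexes, grant-copy arrays, timers, and the rest
         * of the formerly global state, with the large arrays last ... */
};

/* vmalloc'ed array of groups, one entry per online dom0 VCPU (change 3 of
 * the v3 list); each netfront is assigned to one entry. */
static struct xen_netbk *xen_netbk;
static unsigned int xen_netbk_group_nr;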
> >> 
> >> PATCH 01: Generalize static/global variables into 'struct xen_netbk'.
> >> 
> >> PATCH 02: Introduce a new struct type page_ext.
> >> 
> >> PATCH 03: Multiple tasklets support.
> >> 
> >> PATCH 04: Use Kernel thread to replace the tasklet.
> >> 
> >> Recently I re-tested the patchset with an Intel 10G multi-queue NIC,
> >> using 10 external 1G NICs to run netperf tests against that 10G NIC.
> >> 
> >> Case 1: Dom0 has more than 10 vcpus, each pinned to a physical CPU.
> >> With the patchset, throughput is 2x the original.
> >> 
> >> Case 2: Dom0 has 4 vcpus, pinned to 4 physical CPUs.
> >> With the patchset, throughput is 3.7x the original.
> >> 
> >> While testing this patchset, we found that the domain_lock taken in
> >> the grant table operation (gnttab_copy()) becomes a bottleneck. We
> >> temporarily removed the global domain_lock to achieve this performance.
> 
