xen-devel

RE: [Xen-devel][Pv-ops][PATCH 3/4 v2] Netback: Multiple tasklets support

To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: RE: [Xen-devel][Pv-ops][PATCH 3/4 v2] Netback: Multiple tasklets support
From: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>
Date: Tue, 4 May 2010 08:55:12 +0800
Accept-language: en-US
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Steven Smith <steven.smith@xxxxxxxxxx>
Delivery-date: Mon, 03 May 2010 17:57:57 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100503160644.GE31299@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <D5AB6E638E5A3E4B8F4406B113A5A19A1D94B18A@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20100503160644.GE31299@xxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acrq2sUNuxKtfkhBSqiJJmsWchnKcgAST9cg
Thread-topic: [Xen-devel][Pv-ops][PATCH 3/4 v2] Netback: Multiple tasklets support

Konrad Rzeszutek Wilk wrote:
> On Thu, Apr 29, 2010 at 10:28:50PM +0800, Xu, Dongxiao wrote:
>> Netback: Multiple tasklets support.
>> 
>> Currently netback uses one pair of tasklets for Tx/Rx data
>> transactions. The netback tasklet can only run on one CPU at a
>> time, yet it serves all the netfronts, so it has become a
>> performance bottleneck. This patch uses multiple tasklet
>> pairs to replace the current single pair in dom0.
>> 
>> Assuming that Dom0 has CPUNR VCPUs, we define CPUNR tasklet
>> pairs (CPUNR for Tx, and CPUNR for Rx). Each pare of
>> ^^^ -> pair 
> 
>> tasklets serves a specific group of netfronts. Also, those
>> global and static variables are duplicated for each group in
>> order to avoid contention on a single spinlock.
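
(To make the grouping concrete: each dom0 VCPU gets its own group
state, roughly like the sketch below. This is a simplified sketch
only; the real struct xen_netbk in drivers/xen/netback/common.h
also carries the duplicated formerly-global state.)

#include <linux/interrupt.h>
#include <linux/spinlock.h>

/* Simplified per-group state; one instance exists per dom0 VCPU,
 * each with its own Tx/Rx tasklet pair, so groups can run on
 * different CPUs without sharing the old global variables. */
struct xen_netbk_sketch {
	struct tasklet_struct tx_tasklet;	/* this group's Tx work */
	struct tasklet_struct rx_tasklet;	/* this group's Rx work */
	int netfront_count;			/* netfronts assigned here */
	spinlock_t group_operation_lock;	/* guards add/remove */
};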
> 
> scripts/checkpatch.pl --strict
> ~/0003-Netback-Multiple-tasklets-support.patch
> CHECK: spinlock_t definition without comment
> #42: FILE: drivers/xen/netback/common.h:292:
> +       spinlock_t group_operation_lock;
> 
> total: 0 errors, 0 warnings, 1 checks, 626 lines checked
> 
> /home/konrad/0003-Netback-Multiple-tasklets-support.patch has style
> problems, please review.  If any of these errors
> are false positives report them to the maintainer, see
> CHECKPATCH in MAINTAINERS.

Thanks, I will modify it in the next version.

> 
> 
>> 
>> Signed-off-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>
> 
> +static void netbk_add_netif(struct xen_netbk *netbk, int group_nr,
> +                        struct xen_netif *netif)
> +{
> +     int i;
> +     int min_netfront_count;
> +     int min_group = 0;
> +     spin_lock(&netbk->group_operation_lock);
> +     min_netfront_count = netbk[0].netfront_count;
> +     for (i = 0; i < group_nr; i++) {
> +             if (netbk[i].netfront_count < min_netfront_count) {
> +                     min_group = i;
> +                     min_netfront_count = netbk[i].netfront_count;
> 
> Should you have a 'break' here? I am not sure if it makes sense to go
> through all of the tasklets to set the min_group and
> min_netfrount_count to the last one?

To find the minimum count, it has to go through all the tasklet groups. For example, with counts {2, 1, 0}, breaking as soon as a smaller count is found would pick group 1 and miss the idle group 2.

> 
> +             }
> +     }
> +
> +     netif->group = min_group;
> +     netbk[netif->group].netfront_count++;
> +     spin_unlock(&netbk->group_operation_lock);
> +}
> +
> +static void netbk_remove_netif(struct xen_netbk *netbk,
> +                        struct xen_netif *netif)
> +{
> +     spin_lock(&netbk->group_operation_lock);
> +     netbk[netif->group].netfront_count--;
> +     spin_unlock(&netbk->group_operation_lock);
> +}
> +
>  static void __netif_up(struct xen_netif *netif)
>  {
>       enable_irq(netif->irq);
> @@ -333,6 +360,8 @@ int netif_map(struct xen_netif *netif, unsigned long tx_ring_ref,
>       if (netif->rx_comms_area == NULL)
>               goto err_rx;
> 
> +     netbk_add_netif(xen_netbk, xen_netbk_group_nr, netif);
> +
> 
> Say you have 7 VIFs and only 4 VCPUs, are these netfront_count values
> correct?
> 
> netbk[0].netfront_count == 1; /* vif0 added */
> netbk[3].netfront_count == 1; /* vif1 added */
> netbk[2].netfront_count == 1; /* vif2 added */
> netbk[1].netfront_count == 1; /* vif3 added */
> netbk[0].netfront_count == 2; /* vif4 added */
> netbk[3].netfront_count == 2; /* vif5 added */
> netbk[2].netfront_count == 2; /* vif6 added */

Basically that is true, but the order would be different:

netbk[0].netfront_count == 1; /* vif0 added */
netbk[1].netfront_count == 1; /* vif1 added */
netbk[2].netfront_count == 1; /* vif2 added */
netbk[3].netfront_count == 1; /* vif3 added */
netbk[0].netfront_count == 2; /* vif4 added */
netbk[1].netfront_count == 2; /* vif5 added */
netbk[2].netfront_count == 2; /* vif6 added */
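
To double-check that order, here is a tiny standalone simulation of
the selection loop (plain userspace C for illustration, not the
actual netback code):

#include <stdio.h>

#define GROUP_NR 4	/* dom0 VCPUs */
#define VIF_NR   7	/* netfronts to place */

int main(void)
{
	int count[GROUP_NR] = { 0 };
	int vif, i;

	for (vif = 0; vif < VIF_NR; vif++) {
		int min_group = 0;

		/* scan every group; the first group holding the
		 * lowest count wins, as in netbk_add_netif() above */
		for (i = 0; i < GROUP_NR; i++)
			if (count[i] < count[min_group])
				min_group = i;

		count[min_group]++;
		printf("vif%d -> netbk[%d] (count now %d)\n",
		       vif, min_group, count[min_group]);
	}
	return 0;
}

With 4 groups and 7 VIFs this prints vif0..vif3 going to groups 0..3
and vif4..vif6 going to groups 0..2, matching the counts above.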

> 
> I just want to make sure I understand the allocation algorithm
> correctly.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
