Konrad Rzeszutek Wilk wrote:
> On Thu, Apr 29, 2010 at 10:28:50PM +0800, Xu, Dongxiao wrote:
>> Netback: Multiple tasklets support.
>>
>> Currently netback uses a single pair of tasklets for Tx/Rx data
>> transfers. A netback tasklet can only run on one CPU at a time,
>> and it serves all the netfronts, so it has become a performance
>> bottleneck. This patch replaces the single pair with multiple
>> tasklet pairs in dom0.
>>
>> Assuming that Dom0 has CPUNR VCPUs, we define CPUNR tasklet
>> pairs (CPUNR for Tx, and CPUNR for Rx). Each pare of
> ^^^ -> pair
>
>> tasklets serves a specific group of netfronts. Also, the global
>> and static variables are duplicated for each group in order to
>> avoid sharing a spinlock between groups.
>
> scripts/checkpatch.pl --strict
> ~/0003-Netback-Multiple-tasklets-support.patch
> CHECK: spinlock_t definition without comment
> #42: FILE: drivers/xen/netback/common.h:292:
> + spinlock_t group_operation_lock;
>
> total: 0 errors, 0 warnings, 1 checks, 626 lines checked
>
> /home/konrad/0003-Netback-Multiple-tasklets-support.patch has style
> problems, please review. If any of these errors
> are false positives report them to the maintainer, see
> CHECKPATCH in MAINTAINERS.
Thanks, I will fix it in the next version.
>
>
>>
>> Signed-off-by: Dongxiao Xu <dongxiao.xu@xxxxxxxxx>
>
> +static void netbk_add_netif(struct xen_netbk *netbk, int group_nr,
> + struct xen_netif *netif)
> +{
> + int i;
> + int min_netfront_count;
> + int min_group = 0;
> + spin_lock(&netbk->group_operation_lock);
> + min_netfront_count = netbk[0].netfront_count;
> + for (i = 0; i < group_nr; i++) {
> + if (netbk[i].netfront_count < min_netfront_count) {
> + min_group = i;
> + min_netfront_count = netbk[i].netfront_count;
>
> Should you have a 'break' here? I am not sure if it makes sense to go
> through all of the tasklets to set the min_group and
> min_netfront_count to the last one?
To find the group with the minimum netfront count, the loop has to go
through all the groups; a 'break' would stop the scan too early. See the
small user-space sketch right after the quoted function below.
>
> + }
> + }
> +
> + netif->group = min_group;
> + netbk[netif->group].netfront_count++;
> + spin_unlock(&netbk->group_operation_lock);
> +}
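To make that explicit, here is the selection step pulled out into a small
user-space sketch (the struct, harness, and numbers are made up for
illustration; only the loop mirrors the patch):

#include <stdio.h>

struct fake_netbk { int netfront_count; };

static int pick_min_group(struct fake_netbk *netbk, int group_nr)
{
	int i, min_group = 0;
	int min_netfront_count = netbk[0].netfront_count;

	/*
	 * The whole array has to be scanned: breaking out after the first
	 * smaller count would return the first group that beats group 0,
	 * not necessarily the smallest one overall.
	 */
	for (i = 0; i < group_nr; i++) {
		if (netbk[i].netfront_count < min_netfront_count) {
			min_group = i;
			min_netfront_count = netbk[i].netfront_count;
		}
	}
	return min_group;
}

int main(void)
{
	struct fake_netbk netbk[4] = { {2}, {1}, {0}, {1} };

	/* Prints 2; breaking out after updating the minimum at group 1 would return 1. */
	printf("min group = %d\n", pick_min_group(netbk, 4));
	return 0;
}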
> +
> +static void netbk_remove_netif(struct xen_netbk *netbk,
> +		struct xen_netif *netif)
> +{
> + spin_lock(&netbk->group_operation_lock);
> + netbk[netif->group].netfront_count--;
> + spin_unlock(&netbk->group_operation_lock);
> +}
> +
> static void __netif_up(struct xen_netif *netif)
> {
> enable_irq(netif->irq);
> @@ -333,6 +360,8 @@ int netif_map(struct xen_netif *netif, unsigned long tx_ring_ref,
> 	if (netif->rx_comms_area == NULL)
> 		goto err_rx;
>
> + netbk_add_netif(xen_netbk, xen_netbk_group_nr, netif);
> +
>
> Say you have 7 VIFs and only 4 VCPUs; are these netfront_count values
> correct?
>
> netbk[0].netfront_count == 1; /* vif0 added */
> netbk[3].netfront_count == 1; /* vif1 added */
> netbk[2].netfront_count == 1; /* vif2 added */
> netbk[1].netfront_count == 1; /* vif3 added */
> netbk[0].netfront_count == 2; /* vif4 added */
> netbk[3].netfront_count == 2; /* vif5 added */
> netbk[2].netfront_count == 2; /* vif6 added */
Basically yes, but the order is different: since ties go to the
lowest-indexed group, the groups are filled in index order:
netbk[0].netfront_count == 1; /* vif0 added */
netbk[1].netfront_count == 1; /* vif1 added */
netbk[2].netfront_count == 1; /* vif2 added */
netbk[3].netfront_count == 1; /* vif3 added */
netbk[0].netfront_count == 2; /* vif4 added */
netbk[1].netfront_count == 2; /* vif5 added */
netbk[2].netfront_count == 2; /* vif6 added */
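A quick user-space simulation of the same lowest-count, lowest-index-on-tie
policy (illustrative code, not from the patch) reproduces that order and
ends with counts of 2, 2, 2 and 1 for groups 0-3:

#include <stdio.h>

int main(void)
{
	int netfront_count[4] = { 0, 0, 0, 0 };
	int vif, i, min_group;

	for (vif = 0; vif < 7; vif++) {
		/* same policy as netbk_add_netif(): fewest netfronts wins,
		 * ties go to the lowest-numbered group */
		min_group = 0;
		for (i = 1; i < 4; i++)
			if (netfront_count[i] < netfront_count[min_group])
				min_group = i;

		netfront_count[min_group]++;
		printf("vif%d -> group %d\n", vif, min_group);
	}
	return 0;
}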
>
> I just want to make sure I understand the allocation algorithm
> correctly.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel