xen-devel

Re: [Xen-devel][Pv-ops][PATCH] Netback multiple tasklet support

To: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>
Subject: Re: [Xen-devel][Pv-ops][PATCH] Netback multiple tasklet support
From: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Date: Fri, 27 Nov 2009 09:42:54 +0000
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 27 Nov 2009 01:43:21 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <EADF0A36011179459010BDF5142A457501D006B913@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Citrix Systems, Inc.
References: <EADF0A36011179459010BDF5142A457501D006B913@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hi,

Does this change have any impact on the responsiveness of domain 0
userspace while the host is under heavy network load? We have found that
the netback tasklets can completely dominate dom0's VCPU, to the point
where no userspace process ever gets a chance to run; since that
includes sshd and the management toolstack, it can be quite annoying.

The issue was probably specific to our use of a single-VCPU domain 0 in
XenServer, but since your patch introduces a tasklet per VCPU it could
conceivably happen to a multi-VCPU domain 0 as well.
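
For illustration, the problematic pattern is roughly the following (a
minimal sketch; do_some_rx_work() and more_rx_work_pending() are
illustrative stand-ins, not names from the actual netback code):

#include <linux/interrupt.h>
#include <linux/types.h>

/* Illustrative stand-ins for the real netback rx work. */
static bool more_rx_work_pending(void);
static void do_some_rx_work(void);

static struct tasklet_struct net_rx_tasklet;

/*
 * Tasklets run in softirq context, which is serviced ahead of all
 * user tasks. Under sustained load the handler keeps finding more
 * work and reschedules itself, so a single-VCPU dom0 rarely returns
 * to the scheduler and userspace (sshd, the toolstack) starves.
 */
static void net_rx_action(unsigned long data)
{
	do_some_rx_work();
	if (more_rx_work_pending())
		tasklet_schedule(&net_rx_tasklet);
}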

For XenServer we converted the tasklets into a kernel thread, at the
cost of a small reduction in overall throughput but yielding a massive
improvement in domain 0 responsiveness. Unfortunately the change was
made by someone who has since left Citrix, and I cannot locate the
numbers he left behind :-(

Our patch is attached. A netback thread per domain 0 VCPU might be
interesting to experiment with?
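
In outline, the tasklet-to-thread conversion looks something like this
(a minimal sketch of the idea, not the attached patch itself;
netbk_kick() and netbk_action() are illustrative names):

#include <linux/kthread.h>
#include <linux/sched.h>
#include <linux/wait.h>
#include <asm/atomic.h>

static DECLARE_WAIT_QUEUE_HEAD(netbk_wq);
static atomic_t netbk_work = ATOMIC_INIT(0);

static void netbk_action(void);	/* the work formerly done in the tasklet */

/* Called from the interrupt path in place of tasklet_schedule(). */
static void netbk_kick(void)
{
	atomic_set(&netbk_work, 1);
	wake_up(&netbk_wq);
}

static int netbk_thread(void *unused)
{
	while (!kthread_should_stop()) {
		wait_event_interruptible(netbk_wq,
				atomic_read(&netbk_work) ||
				kthread_should_stop());
		if (kthread_should_stop())
			break;
		atomic_set(&netbk_work, 0);
		netbk_action();
		/* As an ordinary task this thread competes fairly with
		 * sshd and the toolstack instead of monopolizing the
		 * VCPU from softirq context. */
		cond_resched();
	}
	return 0;
}

/* Started once at init time with kthread_run(netbk_thread, NULL, "netback"). */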

Ian.

On Fri, 2009-11-27 at 02:26 +0000, Xu, Dongxiao wrote:
> Current netback uses one pair of tasklets for Tx/Rx data transfer.
> The netback tasklet can only run on one CPU at a time, and it serves
> all the netfronts, so it has become a performance bottleneck. This
> patch replaces the current single pair in dom0 with multiple tasklet
> pairs.
>       Assuming that Dom0 has CPUNR VCPUs, we define CPUNR tasklet
> pairs (CPUNR for Tx, and CPUNR for Rx). Each pair of tasklets serves
> a specific group of netfronts. We also duplicated the global and
> static variables for each group in order to avoid spinlock contention.
> 
> Test scenario:
> We use ten 1G NIC interfaces to talk with 10 VMs (netfronts) on the
> server, so the total bandwidth is 10G.
> On the host machine, bind each guest's netfront to its own NIC interface.
> On the client machine, run netperf against each guest.
> 
> Test Case     Packet Size     Throughput (Mbps)     Dom0 CPU Util     Guests CPU Util
> w/o patch     1400            4304.30               400.33%           112.21%
> w/  patch     1400            9533.13               461.64%           243.81%
> 
> BTW, when testing this patch, we found that the domain_lock in the
> grant table operations becomes a bottleneck. We temporarily removed
> the global domain_lock to achieve good performance.
>  
> Best Regards, 
> -- Dongxiao
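
The per-VCPU scheme described above amounts to roughly the following
structure (a sketch of the description, not the posted patch; the
names and the per-group fields are illustrative):

#include <linux/init.h>
#include <linux/interrupt.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <linux/cpumask.h>

struct netbk_group {
	struct tasklet_struct tx_tasklet;
	struct tasklet_struct rx_tasklet;
	struct sk_buff_head tx_queue;	/* per-group copies of what */
	struct sk_buff_head rx_queue;	/* used to be global state  */
};

static struct netbk_group *netbk_groups;
static unsigned int netbk_group_nr;	/* == number of dom0 VCPUs */

static void net_tx_action(unsigned long data)
{
	struct netbk_group *group = (struct netbk_group *)data;
	/* drain group->tx_queue; only this group's private state is
	 * touched, so groups never contend on a shared spinlock */
}

static void net_rx_action(unsigned long data)
{
	struct netbk_group *group = (struct netbk_group *)data;
	/* drain group->rx_queue likewise */
}

static int __init netbk_init_groups(void)
{
	unsigned int i;

	netbk_group_nr = num_online_cpus();
	netbk_groups = kcalloc(netbk_group_nr, sizeof(*netbk_groups),
			       GFP_KERNEL);
	if (!netbk_groups)
		return -ENOMEM;

	for (i = 0; i < netbk_group_nr; i++) {
		struct netbk_group *group = &netbk_groups[i];

		skb_queue_head_init(&group->tx_queue);
		skb_queue_head_init(&group->rx_queue);
		tasklet_init(&group->tx_tasklet, net_tx_action,
			     (unsigned long)group);
		tasklet_init(&group->rx_tasklet, net_rx_action,
			     (unsigned long)group);
	}
	return 0;
}

/* Each netfront is assigned to one group when it connects (e.g.
 * round-robin over the groups), and its interrupt then schedules
 * that group's tasklets. */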

Attachment: netback-thread
Description: Text Data

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel