This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel][Pv-ops][PATCH] Netback multiple tasklet support

To: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel][Pv-ops][PATCH] Netback multiple tasklet support
From: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>
Date: Sat, 28 Nov 2009 00:57:22 +0800
Accept-language: en-US
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Delivery-date: Fri, 27 Nov 2009 08:58:30 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4FA716B1526C7C4DB0375C6DADBC4EA342A7A7E951@xxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <EADF0A36011179459010BDF5142A457501D006B913@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4FA716B1526C7C4DB0375C6DADBC4EA342A7A7E951@xxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcpvCRTacBm7g/TlQ5GetSSm6xA1EAAc2xzgAAA5DEA=
Thread-topic: [Xen-devel][Pv-ops][PATCH] Netback multiple tasklet support
Ian Pratt wrote:
>> Test Case    Packet Size    Throughput(Mbps)    Dom0 CPU Util    Guests CPU Util
>> w/o patch    1400           4304.30             400.33%          112.21%
>> w/  patch    1400           9533.13             461.64%          243.81%
>> BTW, when we test this patch, we found that the domain_lock in grant
>> table operation becomes a bottle neck. We temporarily remove the
>> global domain_lock to achieve good performance.
> What are the figures with the domain_lock still present? How many
> VCPUs did dom0 have (it would be good to see numbers for 2,3 and 4
> VCPUs).  
> I'd rather see use of kthreads than tasklets as this enables more
> control over QoS (I believe there are patches for this). 
> Thanks,
> Ian

The domain lock is taken in the grant_op hypercall. When multiple tasklets 
contend for this big domain lock, it becomes a bottleneck and hurts 
performance. 
Our test system has 16 logical processors in total, so dom0 has 16 vcpus by 
default. 10 of them are used to handle the network load. For our test case, 
dom0's total CPU utilization is ~461.64%, so each vcpu occupies ~46%. 
Actually, the multiple tasklets in netback already improve the QoS of the 
system, and therefore I think they also help responsiveness. I can try to 
write another patch that replaces the tasklets with kthreads, since I think 
that is a separate job from the multi-tasklet netback support (a kthread is 
used to guarantee the responsiveness of userspace, whereas multi-tasklet 
netback is used to remove dom0's CPU utilization bottleneck). However, I am 
not sure whether the improvement in QoS from this change is needed on an MP 
system. 

Xen-devel mailing list