RE: [Xen-devel][Pv-ops][PATCH] Netback multiple tasklet support
Ian Pratt wrote:
>> Test Case   Packet Size   Throughput (Mbps)   Dom0 CPU Util   Guests CPU Util
>> w/o patch   1400          4304.30             400.33%         112.21%
>> w/ patch    1400          9533.13             461.64%         243.81%
>>
>> BTW, when we test this patch, we found that the domain_lock in grant
>> table operation becomes a bottle neck. We temporarily remove the
>> global domain_lock to achieve good performance.
>
> What are the figures with the domain_lock still present? How many
> VCPUs did dom0 have (it would be good to see numbers for 2,3 and 4
> VCPUs).
>
> I'd rather see use of kthreads than tasklets as this enables more
> control over QoS (I believe there are patches for this).
>
> Thanks,
> Ian
The domain lock is taken in the grant_op hypercall. If the multiple tasklets are
fighting with each other for this single big domain lock, it becomes a bottleneck
and hurts performance.
Our test system has 16 logical processors in total, so dom0 has 16 vcpus by
default; 10 of them are used to handle the network load. In our test case, dom0's
total CPU utilization is ~461.64%, so each vcpu occupies ~46%.
Actually, the multiple tasklets in netback already improve the QoS of the system,
so I think they can also help to get better responsiveness for each vcpu.
I think I can try to write another patch that replaces the tasklets with
kthreads, because I think that is a different job from the multi-tasklet netback
support (kthreads are used to guarantee the responsiveness of userspace, whereas
multi-tasklet netback is used to remove dom0's CPU utilization bottleneck).
However, I am not sure whether the QoS improvement from that change is needed on
an MP system?
Thanks!
Dongxiao
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel