RE: [Xen-devel][Pv-ops][PATCH 0/3] Resend: Netback multiple thread support
Steven Smith wrote:
>>>> diff --git a/drivers/xen/netback/netback.c b/drivers/xen/netback/netback.c
>>>> index c24debf..a484b0a 100644
>>>> --- a/drivers/xen/netback/netback.c
>>>> +++ b/drivers/xen/netback/netback.c
>>>> @@ -49,18 +49,13 @@
>>>>
>>>> /*define NETBE_DEBUG_INTERRUPT*/
>>>>
>>>> +struct xen_netbk *xen_netbk;
>>>> +int group_nr = 1;
>>>> +struct page_foreign_tracker *foreign_page_tracker;
>>>>
>>> I think these two would benefit from more descriptive names, given
>>> that they're not static.
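
For illustration only, one way to address this would be to give the non-static globals a netback-specific prefix so their origin is obvious elsewhere in the kernel; these particular names are hypothetical, not taken from the posted series:

    /* Hypothetical renaming sketch, not from the posted patches: prefix the
     * non-static globals so their origin is clear outside netback.c. */
    struct xen_netbk *xen_netbk;
    int xen_netbk_group_nr = 1;
    struct page_foreign_tracker *xen_netbk_foreign_page_tracker;
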
>>
>> Yes. Actually I thought I raised the same points the first time
>> through and Dongxiao had posted patches addressing them. I have to
>> admit I haven't looked at the reposted patches in detail yet. Have
>> we suffered a regression here?
>>
>> Hm, maybe it's just this issue which slipped through.
> I think so, yes (assuming the patches posted on the 26th of April are
> the most recent version).
>
>>> Apart from that, it all looks fine to me.
>> Thanks for looking at this. It had been missing the gaze of some
>> networking-savvy eyes.
> There is one other potential issue which just occurred to me. These
> patches assign netifs to groups pretty much arbitrarily, beyond trying
> to keep the groups balanced. It might be better to try to group
> interfaces so that the tasklet runs on the same VCPU as the interrupt,
> i.e. grouping interfaces according to interrupt affinity. That would
> have two main benefits:
>
> -- Less cross-VCPU traffic, and hence better cache etc. behaviour.
> -- Potentially better balancing. If you find that you've accidentally
> assigned two high-traffic interfaces to the same group, irqbalance
> or whatnot should rebalance the interrupts to different vcpus, but
> that doesn't automatically do us much good because most of the work
> is done in the tasklet (which will still only run on one vcpu and
> hence become a bottleneck). If we rebalanced the netif groups when
> irqbalance rebalanced the interrupts then we'd bypass the issue.
>
> Of course, someone would need to go and implement the
> rebalance-in-response-to-irqbalance, which would be non-trivial.
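
A minimal sketch of what affinity-based group selection could look like, purely for illustration: it assumes the group field on struct xen_netif and the xen_netbk/group_nr globals from this series, that groups are laid out roughly one per CPU (which the posted series does not guarantee), and that the caller can supply the IRQ's current affinity mask; the helper name is made up.

    /* Illustrative only -- not part of the posted patches.  Pick the netbk
     * group whose tasklet runs on (one of) the CPUs that the netif's
     * interrupt is currently affine to.  Needs <linux/cpumask.h>. */
    static int netbk_group_for_affinity(const struct cpumask *affinity)
    {
            int cpu = cpumask_first(affinity);

            return cpu < group_nr ? cpu : 0;
    }

A rebalance hook triggered when irqbalance moves the interrupt could then reassign netif->group from such a helper; the genuinely non-trivial part is quiescing the old group's tasklet and migrating any pending work before the switch.
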
Your idea is workable if each netfront is bound to a single-queue
NIC via a bridge. In that case we know which interrupt serves the
netfront, and we can group netfronts according to that interrupt's
affinity. As you said, though, the effort is non-trivial.
However, if the NIC is multi-queued, it has only one interface but
multiple interrupt queues, and all netfronts are bound to this interface
via one bridge. Then we have no idea which interrupt queue is serving
a given netfront, so rebalancing according to interrupt affinity
becomes a challenge. Do you have any ideas on this point?
Thanks,
Dongxiao
>
> You could imagine doing it by just getting rid of the explicit group
> field in struct xen_netif and using smp_processor_id() instead, but
> that would need quite a bit of extra thought about what happens if
> e.g. the start_xmit and the tx complete interrupt happen on different
> vcpus.
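
For illustration, the group-less variant described above might look roughly like the sketch below (the global names are assumed from this series, the helper name is made up); the comment marks exactly the hazard mentioned.

    /* Illustrative only: index per-group state by the current CPU instead
     * of a stored netif->group.  Must be called with preemption disabled
     * (e.g. from the tasklet or the interrupt handler).  The hazard is
     * that start_xmit and the TX-completion interrupt may run on
     * different VCPUs and would then use two different xen_netbk
     * instances for the same netif. */
    static inline struct xen_netbk *netbk_for_this_cpu(void)
    {
            return &xen_netbk[smp_processor_id() % group_nr];
    }
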
>
> It sounds like the patch provides a useful improvement as it stands,
> and the rebalancing would probably be a bit of a pain, so I don't
> think this is a blocker to an immediate merge, but it'd be nice if
> someone could look at it eventually.
>
> Steven.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel