To: Steven Smith <steven.smith@xxxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: RE: [Xen-devel][Pv-ops][PATCH 0/3] Resend: Netback multiple thread support
From: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>
Date: Wed, 28 Apr 2010 19:36:40 +0800
Cc: Steven Smith <Steven.Smith@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Steven Smith wrote:
>>>> diff --git a/drivers/xen/netback/netback.c b/drivers/xen/netback/netback.c
>>>> index c24debf..a484b0a 100644
>>>> --- a/drivers/xen/netback/netback.c
>>>> +++ b/drivers/xen/netback/netback.c
>>>> @@ -49,18 +49,13 @@
>>>> 
>>>>  /*define NETBE_DEBUG_INTERRUPT*/
>>>> 
>>>> +struct xen_netbk *xen_netbk;
>>>> +int group_nr = 1;
>>>> +struct page_foreign_tracker *foreign_page_tracker;
>>>> 
>>> I think these two would benefit from more descriptive names, given
>>> that they're not static.
>> 
>> Yes.  Actually I thought I raised the same points the first time
>> through and Dongxiao had posted patches addressing them.  I have to
>> admit I haven't looked at the reposted patches in detail yet.  Have
>> we suffered a regression here? 
>> 
>> Hm, maybe it's just this issue which slipped through.
> I think so, yes (assuming the patches posted on the 26th of April are
> the most recent version).
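
(For illustration only: the rename could look roughly like the lines
below. The xen_netbk_ prefix is just one plausible convention assumed
here, not necessarily what the reposted patches use. The struct bodies
are stubbed so the fragment compiles standalone.)

struct xen_netbk { int dummy; };            /* stub for illustration */
struct page_foreign_tracker { int dummy; }; /* stub for illustration */

/* Hypothetical rename sketch: give the non-static globals a
 * subsystem prefix so their origin is obvious at any use site
 * outside netback.c. */
struct xen_netbk *xen_netbk;
int xen_netbk_group_nr = 1;
struct page_foreign_tracker *xen_netbk_foreign_page_tracker;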
> 
>>> Apart from that, it all looks fine to me.
>> Thanks for looking at this.  It had been missing the gaze of some
>> networking-savvy eyes.
> There is one other potential issue which just occurred to me.  These
> patches assign netifs to groups pretty much arbitrarily, beyond trying
> to keep the groups balanced.  It might be better to try to group
> interfaces so that the tasklet runs on the same VCPU as the interrupt,
> i.e. grouping interfaces according to interrupt affinity.  That would
> have two main benefits:
> 
> -- Less cross-VCPU traffic, and hence better cache etc. behaviour.
> -- Potentially better balancing.  If you find that you've accidentally
>    assigned two high-traffic interfaces to the same group, irqbalance
>    or whatnot should rebalance the interrupts to different vcpus, but
>    that doesn't automatically do us much good because most of the work
>    is done in the tasklet (which will still only run on one vcpu and
>    hence become a bottleneck).  If we rebalanced the netif groups when
>    irqbalance rebalanced the interrupts then we'd bypass the issue.
> 
> Of course, someone would need to go and implement the
> rebalance-in-response-to-irqbalance, which would be non-trivial.

Your idea is workable if the netfront is bound to a single-queue NIC
via a bridge: in that case we know which interrupt serves the
netfront, and we can group netfronts according to that interrupt's
affinity. As you said, though, the effort is non-trivial.
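
(Purely as an illustration of that single-queue case: grouping by the
affinity of the serving interrupt could look like the userspace toy
model below. nf_model, irq_affinity_cpu and pick_group are made-up
names for this sketch, not netback code.)

#include <stdio.h>

#define NR_GROUPS 4   /* assume one netback group per VCPU */

struct nf_model {
    const char *name;
    int irq_affinity_cpu;   /* CPU the serving interrupt is bound to */
    int group;              /* netback group chosen for this netfront */
};

/* Group by interrupt affinity: same CPU -> same group, so the
 * tasklet runs on the VCPU where the interrupt is delivered. */
static int pick_group(const struct nf_model *nf)
{
    return nf->irq_affinity_cpu % NR_GROUPS;
}

int main(void)
{
    struct nf_model nfs[] = {
        { "vif1.0", 0, -1 },
        { "vif2.0", 2, -1 },
        { "vif3.0", 2, -1 },   /* same cpu as vif2.0 -> same group */
    };

    for (unsigned int i = 0; i < sizeof(nfs) / sizeof(nfs[0]); i++) {
        nfs[i].group = pick_group(&nfs[i]);
        printf("%s: irq on cpu %d -> group %d\n",
               nfs[i].name, nfs[i].irq_affinity_cpu, nfs[i].group);
    }
    return 0;
}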

However, if the NIC is multi-queued, it has only one interface but
multiple interrupt queues, and all netfronts are bound to that one
interface via a single bridge. We then have no idea which interrupt
queue is serving a given netfront, so rebalancing according to
interrupt affinity becomes a challenge. Do you have any ideas on
this point?

Thanks,
Dongxiao

> 
> You could imagine doing it by just getting rid of the explicit group
> field in struct xen_netif and using smp_processor_id() instead, but
> that would need quite a bit of extra thought about what happens if
> e.g. the start_xmit and the tx complete interrupt happen on different
> vcpus.
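
(To make that concern concrete: a toy model, not netback code, of how
an implicit "group = current CPU" can split one netif's work across
two per-CPU structures. All names here are made up.)

#include <stdio.h>

static int current_cpu;            /* stand-in for smp_processor_id() */
static int pending_per_group[4];   /* per-group outstanding work */

static void model_start_xmit(void)  { pending_per_group[current_cpu]++; }
static void model_tx_complete(void) { pending_per_group[current_cpu]--; }

int main(void)
{
    current_cpu = 1; model_start_xmit();   /* work queued under group 1 */
    current_cpu = 2; model_tx_complete();  /* completion hits group 2   */

    for (int g = 0; g < 4; g++)
        printf("group %d pending: %d\n", g, pending_per_group[g]);
    return 0;   /* groups 1 and 2 now disagree: +1 and -1 */
}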
> 
> It sounds like the patch provides a useful improvement as it stands,
> and the rebalancing would probably be a bit of a pain, so I don't
> think this is a blocker to an immediate merge, but it'd be nice if
> someone could look at it eventually.
> 
> Steven.


