xen-devel

RE: [Xen-devel][Pv-ops][PATCH 0/3] Resend: Netback multiple thread support

To: Steven Smith <steven.smith@xxxxxxxxxx>
Subject: RE: [Xen-devel][Pv-ops][PATCH 0/3] Resend: Netback multiple thread support
From: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>
Date: Wed, 28 Apr 2010 21:33:11 +0800
Accept-language: en-US
Cc: Steven Smith <Steven.Smith@xxxxxxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 28 Apr 2010 06:34:48 -0700
In-reply-to: <20100428120420.GA17571@xxxxxxxxxxxxxxxxxxxxxxxxxx>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
References: <4B187513.80003@xxxxxxxx> <EADF0A36011179459010BDF5142A457501D13FDE62@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4B200727.8040000@xxxxxxxx> <EADF0A36011179459010BDF5142A457501D13FE3BB@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4B213766.7030201@xxxxxxxx> <D5AB6E638E5A3E4B8F4406B113A5A19A1D8CC03F@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20100427104925.GA14523@xxxxxxxxxxxxxxxxxxxxxxxxxx> <4BD72ED4.5060409@xxxxxxxx> <20100428093108.GA17066@xxxxxxxxxxxxxxxxxxxxxxxxxx> <D5AB6E638E5A3E4B8F4406B113A5A19A1D8CC910@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20100428120420.GA17571@xxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Steven Smith wrote:
>>>>> Apart from that, it all looks fine to me.
>>>> Thanks for looking at this.  It had been missing the gaze of some
>>>> networking-savvy eyes.
>>> There is one other potential issue which just occurred to me.  These
>>> patches assign netifs to groups pretty much arbitrarily, beyond
>>> trying to keep the groups balanced.  It might be better to try to
>>> group interfaces so that the tasklet runs on the same VCPU as the
>>> interrupt, i.e. grouping interfaces according to interrupt affinity.
>>> That would have two main benefits: 
>>> 
>>> -- Less cross-VCPU traffic, and hence better cache etc. behaviour.
>>> -- Potentially better balancing.  If you find that you've
>>>    accidentally assigned two high-traffic interfaces to the same
>>>    group, irqbalance or whatnot should rebalance the interrupts to
>>>    different vcpus, but that doesn't automatically do us much good
>>>    because most of the work is done in the tasklet (which will
>>>    still only run on one vcpu and hence become a bottleneck).  If
>>>    we rebalanced the netif groups when irqbalance rebalanced the
>>>    interrupts, then we'd bypass the issue.
>>> 
>>> Of course, someone would need to go and implement the
>>> rebalance-in-response-to-irqbalance, which would be non-trivial.
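For illustration, a minimal sketch of what such affinity-based grouping
could look like; xen_netbk[], xen_netbk_group_nr and group_for_netif()
are assumed names, not taken from the posted patches, and the affinity
accessor varies by kernel version:

#include <linux/irq.h>
#include <linux/cpumask.h>

/*
 * Illustrative only: place a netif in the group whose tasklet runs
 * on the same VCPU that currently services the netif's interrupt.
 */
static struct xen_netbk *group_for_netif(struct xen_netif *netif)
{
	/* On older kernels this would be irq_to_desc(irq)->affinity. */
	const struct cpumask *mask = irq_get_affinity_mask(netif->irq);
	int cpu = cpumask_first(mask);

	/* Assumes one netback group per CPU. */
	return &xen_netbk[cpu % xen_netbk_group_nr];
}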
>> Your idea is workable if the netfront is bound to a single-queue
>> NIC via a bridge. In that case we know which interrupt serves the
>> netfront, and we can group netfronts according to that interrupt's
>> affinity. And as you said, the effort is non-trivial.
>> 
>> However, a multi-queue NIC has only one interface but multiple
>> interrupt queues, and all netfronts are bound to that interface via
>> one bridge. We have no idea which interrupt queue is serving a
>> given netfront, so rebalancing according to interrupt affinity is a
>> challenge. Do you have any ideas on this point?
> Sorry, I should have been clearer here.  When I said ``interrupt'' I
> meant the event channel interrupt which the netfront instance will use
> to notify netback, not the physical hardware interrupt of whatever
> physical NIC is ultimately associated with it.  We should always know
> which event channel a given netfront is using, and hence which
> interrupt, and so we should be able to find out its affinity.  In
> effect, we'd rebalance in response to messages from the guest to
> netback, which is at least vaguely reasonable as a proxy for actual
> load.
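A rough sketch of that rebalance-on-guest-notification idea; the
netif->group field and the move_netif_to_group() helper are
hypothetical, not taken from the posted patches:

/*
 * Illustrative only: if the frontend's event-channel interrupt now
 * fires on a CPU other than the one whose group owns this netif
 * (e.g. because irqbalance moved it), migrate the netif before
 * kicking the group's TX tasklet.
 */
static irqreturn_t netif_be_int(int irq, void *dev_id)
{
	struct xen_netif *netif = dev_id;
	int cpu = smp_processor_id();

	if (unlikely(netif->group != cpu))
		move_netif_to_group(netif, cpu);

	tasklet_schedule(&xen_netbk[netif->group].net_tx_tasklet);
	return IRQ_HANDLED;
}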

OK, I understand: you were thinking about netfront TX, while I was
talking about netfront RX.

In my solution, each tasklet PAIR will be assigned to a group, so I
think the optimization should work in both directions.
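For reference, the per-group tasklet pair described above might look
roughly like this; the field names are assumptions based on the
description, not the patch itself:

/*
 * One instance per group (i.e. per VCPU).  Because a netif's TX and
 * RX tasklets live in the same group, pinning a group to one VCPU
 * helps both directions at once.
 */
struct xen_netbk {
	struct tasklet_struct net_tx_tasklet;   /* guest -> wire */
	struct tasklet_struct net_rx_tasklet;   /* wire -> guest */
	struct list_head net_schedule_list;     /* netifs in this group */
	/* ... per-group pending rings, grant state, etc. ... */
};

static struct xen_netbk *xen_netbk;	/* array, one entry per group */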

We share the view that RX rebalancing is hard to implement, and that
the optimization point is TX rebalancing. Do you think TX rebalancing
would have side effects on the RX direction?

However, my next version of the patch will not include this logic,
since the change is not small and needs more effort.

Thanks,
Dongxiao

> 
> There are at least three relevant contexts here:
> 
> -- Interrupts generated by the hardware
> -- The netback tasklets
> -- Interrupts generated by the guest
> 
> As you say, doing anything based on where hardware interrupts are
> being delivered is somewhere between hard and impossible, but it might
> be possible to do something useful with the interrupts from the guest.
> 
> Steven.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
