xen-devel

Re: [Xen-devel][Pv-ops][PATCH 0/3] Resend: Netback multiple thread support

To: "Xu, Dongxiao" <dongxiao.xu@xxxxxxxxx>
Subject: Re: [Xen-devel][Pv-ops][PATCH 0/3] Resend: Netback multiple thread support
From: Steven Smith <steven.smith@xxxxxxxxxx>
Date: Wed, 28 Apr 2010 13:04:20 +0100
Cc: Steven Smith <Steven.Smith@xxxxxxxxxxxxx>, Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Wed, 28 Apr 2010 05:05:15 -0700
In-reply-to: <D5AB6E638E5A3E4B8F4406B113A5A19A1D8CC910@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
References: <4B187513.80003@xxxxxxxx> <EADF0A36011179459010BDF5142A457501D13FDE62@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4B200727.8040000@xxxxxxxx> <EADF0A36011179459010BDF5142A457501D13FE3BB@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4B213766.7030201@xxxxxxxx> <D5AB6E638E5A3E4B8F4406B113A5A19A1D8CC03F@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <20100427104925.GA14523@xxxxxxxxxxxxxxxxxxxxxxxxxx> <4BD72ED4.5060409@xxxxxxxx> <20100428093108.GA17066@xxxxxxxxxxxxxxxxxxxxxxxxxx> <D5AB6E638E5A3E4B8F4406B113A5A19A1D8CC910@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> >>> Apart from that, it all looks fine to me.
> >> Thanks for looking at this.  It had been missing the gaze of some
> >> networking-savvy eyes.
> > There is one other potential issue which just occurred to me.  These
> > patches assign netifs to groups pretty much arbitrarily, beyond trying
> > to keep the groups balanced.  It might be better to try to group
> > interfaces so that the tasklet runs on the same VCPU as the interrupt
> > i.e. grouping interfaces according to interrupt affinity.  That would
> > have two main benefits:
> > 
> > -- Less cross-VCPU traffic, and hence better cache etc. behaviour.
> > -- Potentially better balancing.  If you find that you've accidentally
> >    assigned two high-traffic interfaces to the same group, irqbalance
> >    or whatnot should rebalance the interrupts to different vcpus, but
> >    that doesn't automatically do us much good because most of the work
> >    is done in the tasklet (which will still only run on one vcpu and
> >    hence become a bottleneck).  If we rebalanced the netif groups when
> >    irqbalance rebalanced the interrupts then we'd bypass the issue.
> > 
> > Of course, someone would need to go and implement the
> > rebalance-in-response-to-irqbalance, which would be non-trivial.
> Your idea is workable if the netfront is bound to a single-queue
> NIC via a bridge.  In that case we know which interrupt serves the
> netfront, and we can group netfronts according to that interrupt's
> affinity.  As you said, though, the effort is non-trivial.
> 
> However, a multi-queue NIC has only one interface but multiple
> interrupt queues, and all netfronts are bound to that one interface
> via a single bridge.  We have no idea which interrupt queue is
> serving a given netfront, so rebalancing according to interrupt
> affinity is a challenge.  Do you have any ideas on this point?
Sorry, I should have been clearer here.  When I said ``interrupt'' I
meant the event channel interrupt which the netfront instance will use
to notify netback, not the physical hardware interrupt of whatever
physical NIC is ultimately associated with it.  We should always know
which event channel a given netfront is using, and hence which
interrupt, and so we should be able to find out its affinity.  In
effect, we'd rebalance in response to messages from the guest to
netback, which is at least vaguely reasonable as a proxy for actual
load.
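
Something concrete (and entirely untested) along these lines is what
I have in mind.  The field and helper names here (netif->irq,
group_for_cpu(), and the irq_get_affinity_mask() accessor from newer
kernels) are illustrative rather than taken from the posted patches:

#include <linux/irq.h>
#include <linux/cpumask.h>

static struct xen_netbk *group_for_netif(struct xen_netif *netif)
{
	/* netif->irq is the Linux irq that the netfront's event
	 * channel was bound to via
	 * bind_interdomain_evtchn_to_irqhandler(). */
	const struct cpumask *mask = irq_get_affinity_mask(netif->irq);
	int cpu = cpumask_first(mask);

	/* Place the netif in the group whose tasklet runs on the CPU
	 * which services its event-channel interrupt, so the tasklet
	 * and the interrupt stay on the same VCPU. */
	return group_for_cpu(cpu);
}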

There are at least three relevant contexts here:

-- Interrupts generated by the hardware
-- The netback tasklets
-- Interrupts generated by the guest

As you say, doing anything based on where hardware interrupts are
being delivered is somewhere between hard and impossible, but it might
be possible to do something useful with the interrupts from the guest.
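
As for the rebalance-in-response-to-irqbalance machinery itself, one
possible shape (again untested and illustrative, building on the
group_for_netif() sketch above) is a periodic work item that re-runs
the affinity lookup and migrates any netif whose interrupt has moved.
netif_list, netif->group and move_netif_to_group() below are made-up
names, and a real version would need locking against the tasklets:

#include <linux/list.h>
#include <linux/workqueue.h>

static void netbk_rebalance_work(struct work_struct *work)
{
	struct xen_netif *netif;

	list_for_each_entry(netif, &netif_list, list) {
		struct xen_netbk *target = group_for_netif(netif);

		/* If irqbalance has moved this netif's interrupt to a
		 * CPU served by a different group, follow it. */
		if (netif->group != target)
			move_netif_to_group(netif, target);
	}
}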

Steven.

