This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Re: [PATCH] xen network backend driver

On Wed, 2011-01-19 at 21:28 +0200, Pasi Kärkkäinen wrote:
> On Wed, Jan 19, 2011 at 11:16:59AM -0800, Jeremy Fitzhardinge wrote:
> > On 01/19/2011 10:05 AM, Ben Hutchings wrote:
> > > Not in itself.  NAPI polling will run on the same CPU which scheduled it
> > > (so wherever the IRQ was initially handled).  If the protocol used
> > > between netfront and netback doesn't support RSS then RPS
> > > <http://lwn.net/Articles/362339/> can be used to spread the RX work
> > > across CPUs.
> > 
> > There's only one irq per netback which is bound to one (V)CPU at a
> > time.  I guess we could extend it to have multiple irqs per netback and
> > some way of distributing packet flows over them, but that would only
> > really make sense if there's a single interface with much more traffic
> > than the others; otherwise the interrupts should be fairly well
> > distributed (assuming that the different netback irqs are routed to
> > different cpus).
> > 
> Does "multiqueue" only work for NIC drivers (and frontend drivers),
> or could it be used also for netback?

Netfront and netback would have to agree on how many queues to use in
each direction.

> (afaik Linux multiqueue enables setting up multiple receive queues
> each having a separate irq.)

In the context of Linux networking, 'multiqueue' generally refers to the
use of multiple *transmit* queues.  The networking core handles
scheduling and locking of each transmit queue, so it had to be extended
to support multiple queues - initially done in 2.6.23, then made more
scalable in later releases.
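
As an aside, on NICs whose drivers support it, the queue counts can
usually be inspected and changed from userspace with ethtool.  A rough
sketch (eth0 is a placeholder, and -L needs a driver that implements
the channels API):

```shell
# Show the current and maximum RX/TX/combined queue ('channel') counts.
ethtool -l eth0

# Request 4 combined RX/TX queue pairs (requires root and driver support).
ethtool -L eth0 combined 4
```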

It was possible to use multiple receive queues per device long before
this since the networking core is not involved in locking them.  (Though
it did require some hacks to create multiple NAPI contexts, before
2.6.24.)  This is mostly useful in conjunction with separate IRQs
per RX queue, spread across multiple CPUs (sometimes referred to as
Receive Side Scaling or RSS).
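
For reference, RPS (the software fallback mentioned above) is
configured per RX queue through sysfs.  A minimal sketch, assuming a
device named eth0 with a queue rx-0 (adjust both to your setup):

```shell
# Sketch: spread RX protocol processing for one queue across CPUs 0-3
# using RPS.  rps_cpus takes a hex bitmap of CPUs, so bits 0..3 -> "f".
MASK=$(printf '%x' $(( (1 << 4) - 1 )))
echo "rps_cpus mask: $MASK"

# Apply it if the sysfs node is present and writable
# (needs root and a kernel built with CONFIG_RPS).
RPS=/sys/class/net/eth0/queues/rx-0/rps_cpus
[ -w "$RPS" ] && echo "$MASK" > "$RPS" || true
```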


Ben Hutchings, Senior Software Engineer, Solarflare Communications
Not speaking for my employer; that's the marketing department's job.
They asked us to note that Solarflare product names are trademarked.

Xen-devel mailing list