xen-devel
Re: [Xen-devel] Question Also regarding interrupt balancing
Hi Keir,
I had tried the following experiment on a 4-way machine:
1) Pin the physical interrupts for the physical nic to pcpu1
2) Pin the domU to pcpu3
And ran a quick netperf test. I noticed that CPU utilization was
around 50% on pcpu0, even though my interrupts were being pinned to pcpu2
and the domU was on pcpu3. That is when I noticed that vif#id.0 has a dynamic
irq which is serviced by pcpu0. Does this irq always run on pcpu0?
Considering that it is dynamic, I understand that we cannot change the
affinity, so I am wondering if there is some other configuration related
to it.
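For reference, the dynamic IRQ for the vif backend shows up in /proc/interrupts, so one way to find its number is to parse that file. A sketch (the sample line below is hypothetical, not from a live system; on a real dom0 you would read /proc/interrupts directly):

```shell
# Hypothetical /proc/interrupts-style line for a vif backend device.
sample='256:   1234   0   0   0   Dynamic-irq  vif1.0'

# Extract the IRQ number for the first vif entry (strip trailing ":").
irq=$(printf '%s\n' "$sample" | awk '/vif/ {gsub(":","",$1); print $1; exit}')
echo "$irq"   # prints 256
```

On a real system you would then look at /proc/irq/$irq/smp_affinity to see which CPUs are allowed to service it.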
Any suggestions/help would be great.
Thanks,
harish
On 6/13/06, Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> wrote:
On 13 Jun 2006, at 00:42, harish wrote:
> echo 2 > /proc/irq/20/smp_affinity [...works...]
> echo 4 > /proc/irq/20/smp_affinity [...works...]
> echo 8 > /proc/irq/20/smp_affinity [...works...]
>
> But a cumulative mask does not work, meaning:
> echo 3, echo 5, echo f, etc. do not work.
>
> Is that a bug or is it by design?
You should find it locks onto the first CPU in the mask that you
specify. As I said, the kernel does not load-balance IRQs, so it
currently does not make sense to specify multi-CPU cpumasks. So this
is by design, for now.
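To illustrate the "first CPU in the mask" behaviour described above, a small sketch of the lowest-set-bit logic (this is an illustration of the effect, not the kernel's actual code path):

```shell
# Given a cpumask, find the lowest-numbered CPU it contains --
# e.g. mask 0x6 (CPUs 1 and 2) resolves to CPU 1, which is the
# CPU the IRQ ends up locked onto.
mask=0x6
first=$(( mask & -mask ))          # isolate the lowest set bit (0x2 here)
cpu=0
while (( (first >> cpu) != 1 )); do
    cpu=$(( cpu + 1 ))             # bit position = CPU number
done
echo "mask $mask -> CPU $cpu"      # prints: mask 0x6 -> CPU 1
```

So `echo 3 > /proc/irq/20/smp_affinity` (CPUs 0 and 1) behaves like `echo 1`: the IRQ sticks to CPU 0.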
-- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel