Function pirq_guest_bind is called for a physical device IRQ, right?
Even if the event channel is bound to one VCPU, why do we need to bind the physical IRQ to a particular physical CPU? The VCPU is not guaranteed to keep running on that physical processor anyway. If Xen instead sets the interrupt affinity for the physical IRQ to all physical processors, the IOAPIC will deliver that IRQ to the physical processors in a round-robin manner, which should give better interrupt latency for physical IRQs.
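To make the distribution argument concrete, here is a small, self-contained toy model (not Xen code; the mask type, the round-robin delivery policy, and all of the names are my own illustration) comparing delivery when the affinity mask contains a single CPU versus all online CPUs:

```c
#include <stdio.h>
#include <stdint.h>

/* Toy model only: a CPU affinity mask is a plain bitmask, and the "IOAPIC"
 * delivers each interrupt to the next CPU present in the mask, round-robin.
 * None of this is Xen code; it just illustrates the distribution effect. */
#define NR_CPUS 4

typedef uint32_t cpumask_t;

static int deliver_irq(cpumask_t affinity, int last_cpu)
{
    /* Pick the next CPU after last_cpu that is present in the mask. */
    for (int step = 1; step <= NR_CPUS; step++) {
        int cpu = (last_cpu + step) % NR_CPUS;
        if (affinity & (1u << cpu))
            return cpu;
    }
    return last_cpu; /* empty mask: keep the previous CPU */
}

int main(void)
{
    cpumask_t one_cpu  = 1u << 0;             /* current behaviour: CPU 0 only */
    cpumask_t all_cpus = (1u << NR_CPUS) - 1; /* proposed: all online CPUs     */

    int last_single = 0, last_all = 0;
    for (int i = 0; i < 8; i++) {
        last_single = deliver_irq(one_cpu, last_single);
        last_all    = deliver_irq(all_cpus, last_all);
        printf("irq %d -> single-CPU mask: cpu%d, all-CPU mask: cpu%d\n",
               i, last_single, last_all);
    }
    return 0;
}
```

With the single-CPU mask every interrupt lands on the same processor; with the full mask the deliveries spread across all of them, which is the latency benefit being argued for above.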
From: Keir Fraser [mailto:Keir.Fraser@xxxxxxxxxxxx]
Sent: Thursday, October 25, 2007 11:42 PM
To: Agarwal, Lomesh; xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-devel] problem in setting cpumask for physical interrupt
An event channel can only be bound to one VCPU at a time. The IRQ should be bound to the CPU that that VCPU runs on.

 -- Keir
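As I read this, the reasoning is that a pirq feeds exactly one event channel, which is bound to exactly one VCPU, so at any moment there is exactly one physical CPU that can usefully take the interrupt. A rough sketch of that chain, using my own illustrative types rather than the actual Xen structures:

```c
/* Illustrative only: hypothetical stand-ins for Xen's structures, showing
 * why the affinity ends up as a single-CPU choice under this argument. */
struct vcpu      { int processor; };            /* pCPU the VCPU currently runs on */
struct evtchn    { struct vcpu *notify_vcpu; }; /* an event channel has one VCPU   */
struct pirq_info { struct evtchn *chn; };       /* a pirq feeds one event channel  */

/* The physical CPU that should receive the pirq, under Keir's argument:
 * the one currently hosting the VCPU bound to the pirq's event channel. */
int pirq_target_cpu(const struct pirq_info *p)
{
    return p->chn->notify_vcpu->processor;
}
```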
On 26/10/07 01:36, "Agarwal, Lomesh" <lomesh.agarwal@xxxxxxxxx> wrote:
Why does function pirq_guest_bind (in arch/x86/irq.c) call set_affinity with the cpumask of the current processor? If I understand correctly, pirq_guest_bind is called in response to a guest calling request_irq. So if, by chance, all guests call request_irq on the same physical processor, Xen may end up setting the interrupt affinity to that one physical processor only.
I think Xen should set the affinity to all available processors; a VCPU is not guaranteed to keep running on the physical processor from which it called request_irq anyway.
I will send a patch if my understanding looks ok.
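A minimal sketch of the kind of change being discussed, assuming a 2007-era Xen-style irq_desc with a set_affinity hook; the types below are self-contained stand-ins, and the real field and helper names (cpumask_of_cpu, cpu_online_map, desc->handler->set_affinity) would need to be checked against the tree before sending an actual patch:

```c
/* Sketch only -- not the actual Xen source. Toy stand-ins for Xen's types
 * so the idea is self-contained; names and signatures are assumptions. */
typedef unsigned long cpumask_t;                 /* stand-in mask type */

struct hw_interrupt_type {
    void (*set_affinity)(int irq, cpumask_t mask);
};

struct irq_desc {
    struct hw_interrupt_type *handler;
};

#define cpumask_of_cpu(c)  (1UL << (c))
cpumask_t cpu_online_map = 0xfUL;                /* pretend 4 CPUs are online */
int smp_processor_id(void) { return 0; }         /* pretend we run on CPU 0   */

void bind_pirq_affinity(struct irq_desc *desc, int irq)
{
    /* Current behaviour as I read pirq_guest_bind: pin the IRQ to whichever
     * physical CPU happened to execute the bind. */
    /* desc->handler->set_affinity(irq, cpumask_of_cpu(smp_processor_id())); */

    /* Proposed behaviour: let the IOAPIC distribute among all online CPUs,
     * since the VCPU owning the event channel can migrate anyway. */
    desc->handler->set_affinity(irq, cpu_online_map);
}
```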