RE: [Xen-devel] interrupt affinity question
The dma_msi_* stuff in intel-iommu.c is not related to this. It looks like an area that needs to be cleaned up a bit.
The call to request_irq() is for setting up the VT-d fault handler - linking the vector with iommu_page_fault(). It is only used when there is an IOMMU page fault, which should not happen if everything is set up correctly.
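For reference, the wiring is roughly the sketch below - simplified, with the glue around iommu_page_fault() and the dma_msi_* ops being illustrative rather than quoted from intel-iommu.c:

    /* Sketch: hooking up the VT-d fault-reporting interrupt.
     * iommu_page_fault() and the dma_msi_* handler ops are the names
     * discussed above; the helper and its signature are illustrative. */
    static void iommu_page_fault(int vector, void *dev_id,
                                 struct cpu_user_regs *regs)
    {
        /* Runs only on a DMA remapping fault: read the fault status and
         * fault record registers, log the faulting address, clear them. */
    }

    static int iommu_setup_fault_irq(struct iommu *iommu) /* illustrative */
    {
        int vector = assign_irq_vector(AUTO_ASSIGN);

        irq_desc[vector].handler = &dma_msi_type;  /* the dma_msi_* ops */
        return request_irq(vector, iommu_page_fault, 0, "dmar", iommu);
    }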
Passthru device interrupt handling is via the do_IRQ->do_IRQ_guest->hvm_do_IRQ_dpci path. The ioapic programming for the passthru device was originally set up by the dom0 pci driver. The interrupt of the passthru device always gets handled by Xen first and then gets re-injected into the guest via the virtual ioapic/lapic models.
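In outline (a simplified sketch of that path, not quoted from the tree):

    /* Sketch: delivery path for a passthru device interrupt
     * (control flow only; simplified from xen/arch/x86/irq.c). */
    asmlinkage void do_IRQ(struct cpu_user_regs *regs)
    {
        unsigned int vector = regs->entry_vector;
        irq_desc_t *desc = &irq_desc[vector];

        if ( desc->status & IRQ_GUEST )
            __do_IRQ_guest(vector); /* pirq is bound to a guest */
        /* ... host-owned interrupts run their action handler ... */
    }

    /* __do_IRQ_guest() sees that the pirq maps to an HVM guest and calls
     * hvm_do_IRQ_dpci(), which raises the corresponding pin on the
     * virtual ioapic/lapic models and kicks the vcpu; the guest sees
     * the interrupt on its next entry. */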
There is an interrupt latency between the point where the physical interrupt occurs and the point where the virtual interrupt is injected into the guest - especially if the guest's vcpu is not running. We are still investigating how to lower this latency.
Allen
From looking at the code, it looks like interrupt affinity will be set for all physical IRQs, and that it will be set to the physical processor on which the VCPU that called request_irq is running. Can somebody confirm my understanding?
pirq_guest_bind() (in arch/x86/irq.c) calls set_affinity() (which translates to the dma_msi_set_affinity() function in arch/x86/hvm/vmx/vtd/intel-iommu.c for VT-d).
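Something like the following sketch, if I read the code right (the cpumask construction here is illustrative; the point is just that the mask is derived from the pcpu the binding vcpu is currently on):

    /* Sketch of the bind-time affinity behaviour described above
     * (simplified; bookkeeping elided, cpumask helper illustrative). */
    int pirq_guest_bind(struct vcpu *v, int irq, int will_share)
    {
        irq_desc_t *desc = &irq_desc[irq];

        /* ... validation and pirq bookkeeping elided ... */

        /* The affinity ends up pointing at whatever pcpu the binding
         * vcpu happens to run on - nothing re-targets it later when
         * the scheduler moves the vcpu. */
        desc->handler->set_affinity(irq, cpumask_of_cpu(v->processor));
        return 0;
    }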
So that means that if request_irq for a NIC interrupt is called while a domain with a single VCPU is scheduled on physical CPU 1, then the NIC interrupt will be bound to physical CPU 1, and later, if the same domain is scheduled onto physical CPU 0, it won't get the interrupt until it does a VMEXIT.
So for lower interrupt latency we should also pin the domain's VCPU.
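(If pinning is the answer, it can be done from dom0 with the toolstack, e.g.

    xm vcpu-pin <domain> <vcpu> <pcpu>

so the vcpu stays on the pcpu that the IRQ affinity points at.)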