[PATCH 04/12] x86/irq: set accurate cpu_mask for high priority vectors used by external interrupts

Setting the irq descriptor target CPU mask of high priority interrupts to
contain all online CPUs is not accurate. External interrupts are
exclusively delivered using physical destination mode, and hence can only
target a single CPU. Setting the descriptor CPU mask to contain all online
CPUs makes it impossible for Xen to figure out which CPU the interrupt is
really targeting.
Instead handle high priority vectors used by external interrupts similarly
to normal vectors, keeping the target CPU mask accurate. Introduce
specific code in _assign_irq_vector() to deal with moving high priority
vectors across CPUs; this is needed at least for fixup_irqs() to be able to
evacuate them if the target CPU goes offline.
Fixes: fc0c3fa2ad5c ("x86/IO-APIC: fix setup of Xen internally used IRQs (take 2)")
Signed-off-by: Roger Pau Monné <roger.pau@xxxxxxxxxx>
---
 xen/arch/x86/irq.c     | 24 ++++++++++++++++++------
 xen/arch/x86/smpboot.c |  3 ++-
 2 files changed, 20 insertions(+), 7 deletions(-)
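
As an aside (illustration only, not part of the patch): the invariant this
change establishes is that an external interrupt descriptor always targets
exactly one online CPU, as physical destination mode cannot address more.
A hypothetical sanity check of that invariant, using the existing cpumask
helpers, could look like:

    /* Hypothetical check of the single-target invariant. */
    static void check_ext_irq_target(const struct irq_desc *desc)
    {
        /* Exactly one CPU set in the descriptor mask... */
        ASSERT(cpumask_weight(desc->arch.cpu_mask) == 1);
        /* ...and that CPU must be online. */
        ASSERT(cpumask_subset(desc->arch.cpu_mask, &cpu_online_map));
    }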
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index 7009a3c6d0dd..5cd934ea2a32 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -547,6 +547,20 @@ static int _assign_irq_vector(struct irq_desc *desc, const cpumask_t *mask)
         cpumask_t tmp_mask;
 
         cpumask_and(&tmp_mask, mask, &cpu_online_map);
+
+        /*
+         * High priority vectors are reserved on all CPUs, hence moving them
+         * just requires changing the target CPU. There's no need for vector
+         * allocation on the destination.
+         */
+        if ( old_vector >= FIRST_HIPRIORITY_VECTOR &&
+             old_vector <= LAST_HIPRIORITY_VECTOR )
+        {
+            cpumask_copy(desc->arch.cpu_mask,
+                         cpumask_of(cpumask_any(&tmp_mask)));
+            return 0;
+        }
+
         if (cpumask_intersects(&tmp_mask, desc->arch.cpu_mask)) {
             desc->arch.vector = old_vector;
             return 0;
@@ -756,12 +770,10 @@ void setup_vector_irq(unsigned int cpu)
         if ( !irq_desc_initialized(desc) )
             continue;
         vector = irq_to_vector(irq);
-        if ( vector >= FIRST_HIPRIORITY_VECTOR &&
-             vector <= LAST_HIPRIORITY_VECTOR )
-            cpumask_set_cpu(cpu, desc->arch.cpu_mask);
-        else if ( !cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
-            continue;
-        per_cpu(vector_irq, cpu)[vector] = irq;
+        if ( (vector >= FIRST_HIPRIORITY_VECTOR &&
+              vector <= LAST_HIPRIORITY_VECTOR) ||
+             cpumask_test_cpu(cpu, desc->arch.cpu_mask) )
+            per_cpu(vector_irq, cpu)[vector] = irq;
     }
 }
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 7fab5552335b..69cc9bbc6e2c 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -1464,7 +1464,8 @@ void __init smp_intr_init(void)
         desc = irq_to_desc(irq);
         desc->arch.vector = vector;
-        cpumask_copy(desc->arch.cpu_mask, &cpu_online_map);
+        cpumask_copy(desc->arch.cpu_mask, cpumask_of(cpu));
+        cpumask_setall(desc->affinity);
     }
 
     /* Direct IPI vectors. */
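
For context (a sketch, not part of the patch): with the
_assign_irq_vector() change above, evacuating a high priority vector from
a CPU going offline reduces to re-assigning the interrupt to the remaining
online CPUs; the vector number is preserved because high priority vectors
are reserved on all CPUs:

    /* Sketch: re-target a high priority vector. No new vector is
     * allocated, only desc->arch.cpu_mask changes. */
    if ( desc->arch.vector >= FIRST_HIPRIORITY_VECTOR &&
         desc->arch.vector <= LAST_HIPRIORITY_VECTOR )
        assign_irq_vector(desc->irq, &cpu_online_map);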
--
2.51.0