Re: [PATCH 5/5] x86/xen: Don't register PV spinlock IPI when it isn't going to be used
On Mon, 2021-01-04 at 17:09 -0500, boris.ostrovsky@xxxxxxxxxx wrote:
>
> I actually think this should go further in that only IPI-related ops
> should be conditioned on vector callback presence. The rest are
> generic VCPU routines that are not necessarily interrupt/event-
> related. And if they call something that *is* related then those
> specific routines should decide what to do based on
> xen_have_vector_callback.
Right.
My current patch (that I'm about to test) now looks like this:
commit 8126bf76319257fca0cf0b87fdfaaa42d2c658b6
Author: David Woodhouse <dwmw@xxxxxxxxxxxx>
Date: Mon Jan 4 20:54:05 2021 +0000
x86/xen: Fix xen_hvm_smp_init() when vector callback not available
Not all of the smp_ops should be conditional on the vector callback
being available. Mostly just the IPI-related functions should.
The xen_hvm_smp_prepare_boot_cpu() function does two things, both
of which should still happen if there is no vector callback support.
The call to xen_vcpu_setup() for vCPU0 should still happen as it just
sets up the vcpu_info for CPU0. That does happen for the secondary
vCPUs too, from xen_cpu_up_prepare_hvm().
The second thing xen_hvm_smp_prepare_boot_cpu() does is to call
xen_init_spinlocks(), which should *also* still be happening in the
no-vector-callbacks case, so that it can clear its local xen_pvspin
flag and disable the virt_spin_lock_key accordingly.
Checking xen_have_vector_callback in xen_init_spinlocks() itself would
affect PV guests, so set the global nopvspin flag in xen_hvm_smp_init()
instead, when vector callbacks aren't available.
The xen_hvm_smp_prepare_cpus() function does some IPI-related setup
by calling xen_smp_intr_init() and xen_init_lock_cpu(), which can be
made conditional. It also sets the xen_vcpu_id to XEN_VCPU_ID_INVALID
for all possible CPUs, which does still need to happen.
Finally, xen_smp_cpus_done() offlines any vCPU which doesn't fit in
the global shared_info page, if separate vcpu_info placement isn't
available. That part also needs to happen unconditionally.
Signed-off-by: David Woodhouse <dwmw@xxxxxxxxxxxx>
diff --git a/arch/x86/xen/smp_hvm.c b/arch/x86/xen/smp_hvm.c
index f5e7db4f82ab..4c959369f855 100644
--- a/arch/x86/xen/smp_hvm.c
+++ b/arch/x86/xen/smp_hvm.c
@@ -33,9 +33,11 @@ static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
 	int cpu;
 
 	native_smp_prepare_cpus(max_cpus);
-	WARN_ON(xen_smp_intr_init(0));
 
-	xen_init_lock_cpu(0);
+	if (xen_have_vector_callback) {
+		WARN_ON(xen_smp_intr_init(0));
+		xen_init_lock_cpu(0);
+	}
 
 	for_each_possible_cpu(cpu) {
 		if (cpu == 0)
@@ -64,14 +66,17 @@ static void xen_hvm_cpu_die(unsigned int cpu)
 
 void __init xen_hvm_smp_init(void)
 {
-	if (!xen_have_vector_callback)
+	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
+	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
+	smp_ops.smp_cpus_done = xen_smp_cpus_done;
+
+	if (!xen_have_vector_callback) {
+		nopvspin = true;
 		return;
+	}
 
-	smp_ops.smp_prepare_cpus = xen_hvm_smp_prepare_cpus;
 	smp_ops.smp_send_reschedule = xen_smp_send_reschedule;
 	smp_ops.cpu_die = xen_hvm_cpu_die;
 	smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
 	smp_ops.send_call_func_single_ipi = xen_smp_send_call_function_single_ipi;
-	smp_ops.smp_prepare_boot_cpu = xen_hvm_smp_prepare_boot_cpu;
-	smp_ops.smp_cpus_done = xen_smp_cpus_done;
 }
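(For reference, not part of the patch: the boot-CPU prepare path the commit
message describes looks roughly like this; a paraphrase of
arch/x86/xen/smp_hvm.c as I read it, with the comments being mine.)

	static void __init xen_hvm_smp_prepare_boot_cpu(void)
	{
		BUG_ON(smp_processor_id() != 0);
		native_smp_prepare_boot_cpu();

		/*
		 * Set up the vcpu_info for CPU0. Secondary vCPUs get theirs
		 * from xen_cpu_up_prepare_hvm(), so this is what would be
		 * lost if the whole callback were skipped.
		 */
		xen_vcpu_setup(0);

		/*
		 * Still wanted without vector callbacks: with nopvspin set
		 * by xen_hvm_smp_init(), this clears xen_pvspin and disables
		 * virt_spin_lock_key instead of installing PV spinlocks.
		 */
		xen_init_spinlocks();
	}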
>
> Also, for the spinlock changes specifically --- I wonder whether it
> would be better to reverse initial value of xen_pvspin and set it to
> 'true' only if initialization succeeds.
I looked at that but it would need to be tristate, since the
'xen_nopvspin' command line option clears it from its default of being
enabled.
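To make that concrete, a rough sketch of how the flags interact (my
paraphrase, not verbatim kernel code; it only assumes what's stated above,
i.e. xen_pvspin defaults to enabled and the 'xen_nopvspin' option clears it
before xen_init_spinlocks() runs):

	/* Defaults to true; cleared by the "xen_nopvspin" command line option. */
	static bool xen_pvspin __initdata = true;

	void __init xen_init_spinlocks(void)
	{
		/*
		 * The generic nopvspin flag, which the patch above sets from
		 * xen_hvm_smp_init() when no vector callback is available,
		 * folds into the Xen-local flag here.
		 */
		if (nopvspin)
			xen_pvspin = false;

		if (!xen_pvspin) {
			static_branch_disable(&virt_spin_lock_key);
			return;
		}

		/* ... otherwise install the PV spinlock pv_ops ... */
	}

Starting xen_pvspin at false and only setting it to true once initialization
succeeds would lose the distinction between "user disabled it on the command
line" and "not initialized yet", which is why it would have to become a
tristate.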