[Xen-devel] Re: [PATCH] enable pvhvm vcpu placement in kernel

On Tue, 8 Nov 2011, Konrad Rzeszutek Wilk wrote:
> On Thu, Oct 27, 2011 at 10:28:59PM -0700, zhenzhong.duan@xxxxxxxxxx wrote:
> > A PVHVM guest running with more than 32 vcpus and pv_irq/pv_time
> > enabled needs vcpu placement to work, or else it will soft lockup.
> 
> Stefano?

Ack.

> > 
> > Signed-off-by: Zhenzhong Duan <zhenzhong.duan@xxxxxxxxxx>
> > ---
> > diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
> > index da8afd5..1f92865 100644
> > --- a/arch/x86/xen/enlighten.c
> > +++ b/arch/x86/xen/enlighten.c
> > @@ -1356,7 +1356,7 @@ static int __cpuinit xen_hvm_cpu_notify(struct notifier_block *self,
> >     int cpu = (long)hcpu;
> >     switch (action) {
> >     case CPU_UP_PREPARE:
> > -           per_cpu(xen_vcpu, cpu) = &HYPERVISOR_shared_info->vcpu_info[cpu];
> > +           xen_vcpu_setup(cpu);
> >             if (xen_have_vector_callback)
> >                     xen_init_lock_cpu(cpu);
> >             break;
> > @@ -1386,7 +1386,6 @@ static void __init xen_hvm_guest_init(void)
> >     xen_hvm_smp_init();
> >     register_cpu_notifier(&xen_hvm_cpu_notifier);
> >     xen_unplug_emulated_devices();
> > -   have_vcpu_info_placement = 0;
> >     x86_init.irqs.intr_init = xen_init_IRQ;
> >     xen_hvm_init_time_ops();
> >     xen_hvm_init_mmu_ops();
> 
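
For background on why switching to xen_vcpu_setup() fixes this: HYPERVISOR_shared_info only carries MAX_VIRT_CPUS (32) vcpu_info slots, so vcpus beyond that have no slot in the shared page and must ask Xen to place their vcpu_info into per-cpu guest memory instead. The sketch below is not part of the patch; it loosely follows the upstream xen_vcpu_setup() in arch/x86/xen/enlighten.c (the per-cpu variables xen_vcpu, xen_vcpu_info and the have_vcpu_info_placement flag already exist in that file), and exact field names and error handling may differ from the real function.

/*
 * Sketch only -- not from this patch.  Registers a per-cpu vcpu_info
 * with the hypervisor so that cpus >= MAX_VIRT_CPUS get working
 * event and time delivery.
 */
#include <linux/percpu.h>
#include <linux/mm.h>
#include <xen/interface/vcpu.h>
#include <asm/xen/hypercall.h>
#include <asm/xen/page.h>

static void example_vcpu_setup(int cpu)
{
	struct vcpu_register_vcpu_info info;
	struct vcpu_info *vcpup;
	int err;

	/* The shared_info page only has slots for the first 32 vcpus. */
	if (cpu < MAX_VIRT_CPUS)
		per_cpu(xen_vcpu, cpu) =
			&HYPERVISOR_shared_info->vcpu_info[cpu];

	if (!have_vcpu_info_placement)
		return;

	/* Ask Xen to deliver this vcpu's state into our per-cpu area. */
	vcpup = &per_cpu(xen_vcpu_info, cpu);
	info.mfn = arbitrary_virt_to_mfn(vcpup);
	info.offset = offset_in_page(vcpup);

	err = HYPERVISOR_vcpu_op(VCPUOP_register_vcpu_info, cpu, &info);
	if (err) {
		/* Fall back to the shared_info slot, which only exists
		 * for cpu < MAX_VIRT_CPUS. */
		have_vcpu_info_placement = 0;
	} else {
		per_cpu(xen_vcpu, cpu) = vcpup;
	}
}

With the registration done per cpu, the CPU_UP_PREPARE path above can call xen_vcpu_setup(cpu) for any cpu number, which is why the blanket "have_vcpu_info_placement = 0" in xen_hvm_guest_init() can be dropped.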

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
