This patch seems to cause suspend/resume to fail, at least for a pvops
kernel (I tried 2.6.29/30/31, currently using .30).
The stack trace is lost among a raft of "BUG: recent printk recursion!"
messages, but I think the actual issue is:
[ 32.904966] WARNING: at /local/scratch/ianc/devel/kernels/linux-2.6/arch/x86/xen/time.c:180 xen_sched_clock+0x6d/0x70()
This is "WARN_ON(state.state != RUNSTATE_running);".
With some debugging I've found that the kernel currently thinks the
runstate is 3 (i.e. RUNSTATE_offline).
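For reference, the runstate values are defined in
xen/include/public/vcpu.h:

    #define RUNSTATE_running  0  /* currently scheduled on a physical CPU */
    #define RUNSTATE_runnable 1  /* runnable, but not currently scheduled */
    #define RUNSTATE_blocked  2  /* blocked (a.k.a. idle), not runnable */
    #define RUNSTATE_offline  3  /* not currently being run */

i.e. the guest is still sampling a state which was last written while
the old vCPU was being taken down.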
I think the issue is that when the guest is suspended the final
deschedule now updates the guest runstate to RUNSTATE_offline, but when
the new VCPU in the new domain is first scheduled the guest_runstate
pointer has not yet been registered, so the guest's copy never gets
updated again.
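(For context: the guest hands the hypervisor a pointer to its per-vCPU
runstate structure via the VCPUOP_register_runstate_memory_area
hypercall. In the 2.6.30 pvops tree the registration in
arch/x86/xen/time.c is roughly the following -- quoted from memory, so
the exact shape may differ:

    static void setup_runstate_info(int cpu)
    {
            struct vcpu_register_runstate_memory_area area;

            /* Tell Xen where this CPU's runstate copy lives. */
            area.addr.v = &per_cpu(xen_runstate, cpu);

            if (HYPERVISOR_vcpu_op(VCPUOP_register_runstate_memory_area,
                                   cpu, &area))
                    BUG();
    }

Since this records a guest virtual address in the old domain only, the
domain built on resume starts out with a NULL runstate_guest handle
until the guest re-issues the hypercall.)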
I can't see where the guest runstate pointer is supposed to be either
restored or re-registered on resume. I tried adding a
setup_runstate_info call to xen_timer_resume (to match the call in
xen_timer_setup), but that already seems to be too late -- I still see
the warnings trigger. I'm not sure how that is possible, since I
thought we were inside a stop_machine section at that point.
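For reference, the change I tried was essentially this (sketch only;
hunk context reconstructed from memory):

    --- a/arch/x86/xen/time.c
    +++ b/arch/x86/xen/time.c
    @@ void xen_timer_resume(void)
         for_each_online_cpu(cpu) {
    +        setup_runstate_info(cpu);  /* re-register the runstate area */
             if (HYPERVISOR_vcpu_op(VCPUOP_stop_periodic_timer, cpu, NULL))
                 BUG();
         }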
Ian.
On Tue, 2009-08-18 at 13:48 +0100, Jan Beulich wrote:
> In order to give guests a hint at whether their vCPU-s are currently
> scheduled (so they can e.g. adapt their behavior in spin loops), update
> the run state area (if registered) also when de-scheduling a vCPU.
>
> Also fix an oversight in the compat mode implementation of
> VCPUOP_register_runstate_memory_area.
>
> Please also consider for the 3.4 and 3.3 branches.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
>
> --- 2009-08-18.orig/xen/arch/x86/domain.c 2009-08-17 11:37:44.000000000 +0200
> +++ 2009-08-18/xen/arch/x86/domain.c 2009-08-18 14:18:08.000000000 +0200
> @@ -1265,6 +1265,26 @@ static void paravirt_ctxt_switch_to(stru
>      }
>  }
>
> +/* Update per-VCPU guest runstate shared memory area (if registered). */
> +static void update_runstate_area(struct vcpu *v)
> +{
> +    if ( guest_handle_is_null(runstate_guest(v)) )
> +        return;
> +
> +#ifdef CONFIG_COMPAT
> +    if ( is_pv_32on64_domain(v->domain) )
> +    {
> +        struct compat_vcpu_runstate_info info;
> +
> +        XLAT_vcpu_runstate_info(&info, &v->runstate);
> +        __copy_to_guest(v->runstate_guest.compat, &info, 1);
> +        return;
> +    }
> +#endif
> +
> +    __copy_to_guest(runstate_guest(v), &v->runstate, 1);
> +}
> +
>  static inline int need_full_gdt(struct vcpu *v)
>  {
>      return (!is_hvm_vcpu(v) && !is_idle_vcpu(v));
> @@ -1356,6 +1376,9 @@ void context_switch(struct vcpu *prev, s
>          flush_tlb_mask(&dirty_mask);
>      }
> 
> +    if (prev != next)
> +        update_runstate_area(prev);
> +
>      if ( is_hvm_vcpu(prev) && !list_empty(&prev->arch.hvm_vcpu.tm_list) )
>          pt_save_timer(prev);
>
> @@ -1395,21 +1418,8 @@ void context_switch(struct vcpu *prev, s
>
>      context_saved(prev);
> 
> -    /* Update per-VCPU guest runstate shared memory area (if registered). */
> -    if ( !guest_handle_is_null(runstate_guest(next)) )
> -    {
> -        if ( !is_pv_32on64_domain(next->domain) )
> -            __copy_to_guest(runstate_guest(next), &next->runstate, 1);
> -#ifdef CONFIG_COMPAT
> -        else
> -        {
> -            struct compat_vcpu_runstate_info info;
> -
> -            XLAT_vcpu_runstate_info(&info, &next->runstate);
> -            __copy_to_guest(next->runstate_guest.compat, &info, 1);
> -        }
> -#endif
> -    }
> +    if (prev != next)
> +        update_runstate_area(next);
> 
>      schedule_tail(next);
>      BUG();
> --- 2009-08-18.orig/xen/arch/x86/x86_64/domain.c 2008-05-13 11:02:22.000000000 +0200
> +++ 2009-08-18/xen/arch/x86/x86_64/domain.c 2009-08-18 14:18:08.000000000 +0200
> @@ -56,7 +56,7 @@ arch_compat_vcpu_op(
>              struct vcpu_runstate_info runstate;
> 
>              vcpu_runstate_get(v, &runstate);
> -            XLAT_vcpu_runstate_info(&info, &v->runstate);
> +            XLAT_vcpu_runstate_info(&info, &runstate);
>          }
>          __copy_to_guest(v->runstate_guest.compat, &info, 1);
> 
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel