WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>
Subject: Re: xl/xm save -c fails - set_vcpucontext EOPNOTSUPP (was Re: [Xen-devel] xl save -c issues with Windows 7 Ultimate)
From: Shriram Rajagopalan <rshriram@xxxxxxxxx>
Date: Tue, 10 May 2011 09:52:54 -0500
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 10 May 2011 07:55:58 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1305016915.26692.261.camel@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <BANLkTi=a4=uNLYSA+0FEX+oX=iBmStn3aA@xxxxxxxxxxxxxx> <1305016915.26692.261.camel@xxxxxxxxxxxxxxxxxxxxxx>
Reply-to: rshriram@xxxxxxxxx
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Tue, May 10, 2011 at 3:41 AM, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
On Tue, 2011-05-10 at 00:06 +0100, Shriram Rajagopalan wrote:
> I was testing xl/xm checkpoint with the latest c/s in the repo, 23300.
> neither xl nor xm seem to work. The error code is 95 (EOPNOTSUPP).
>
> Migration works but not checkpointing. While doing a
> xc_domain_resume,
> the "modify_returncode" phase (for suspend_cancel) fails. Tracing
> through
> the control flow, I found that the hypercall for set_vcpucontext
> (in do_xen_hypercall() from xc_private.c) fails with this error code.
>
> I have tested this with a 64-bit 2.6.39 and 32-bit 2.6.18 pv domU.
> Any help would be great.

Are we still talking about HVM guests?

No! It's all PV. There is a 2.6.39-rc1 Debian guest and a 2.6.18
standard XenLinux-kernel-based Debian guest.
The most plausible looking EOPNOTSUPP from that code is in
xen/arch/x86/domain.c:arch_set_info_guest() but that is on a PV only
path.

And that rings true with the PV guests I am using. It makes perfect sense, looking
at that function and especially at the code that returns EOPNOTSUPP (the only
place in the entire file):
    else
    {
        bool_t fail = v->arch.pv_vcpu.ctrlreg[3] != c(ctrlreg[3]);

#ifdef CONFIG_X86_64
        fail |= v->arch.pv_vcpu.ctrlreg[1] != c(ctrlreg[1]);
#endif

        for ( i = 0; i < ARRAY_SIZE(v->arch.pv_vcpu.gdt_frames); ++i )
            fail |= v->arch.pv_vcpu.gdt_frames[i] != c(gdt_frames[i]);
        fail |= v->arch.pv_vcpu.gdt_ents != c(gdt_ents);

        fail |= v->arch.pv_vcpu.ldt_base != c(ldt_base);
        fail |= v->arch.pv_vcpu.ldt_ents != c(ldt_ents);

        if ( fail )
            return -EOPNOTSUPP;
    }

This change was introduced by c/s
changeset:   23142:f5e8d152a565
user:        Jan Beulich <jbeulich@xxxxxxxxxx>
date:        Tue Apr 05 13:01:25 2011 +0100
x86: split struct vcpu

I think I am missing something really obvious in this piece of code. The
xc_domain_resume code tries to modify the return value of the shutdown hypercall
(i.e., the eax register is set to 1), and this code doesn't seem to check those registers.


There are only a small number of uses of EOPNOTSUPP in the hypervisor
and the rest are all in xen/arch/x86/hvm/hvm.c or
xen/arch/x86/hvm/nestedhvm.c and all are in nestedhvm related functions.
I guess you aren't using nested HVM though?!

Nope
Ian.


shriram
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel