WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

To: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 10/14] Nested Virtualization: svm specific implementation
From: Christoph Egger <Christoph.Egger@xxxxxxx>
Date: Tue, 17 Aug 2010 17:49:55 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 17 Aug 2010 08:51:10 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20100809125731.GB13291@xxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <201008051704.03074.Christoph.Egger@xxxxxxx> <20100809125731.GB13291@xxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.10
On Monday 09 August 2010 14:57:31 Tim Deegan wrote:
> > @@ -692,8 +732,10 @@ static void svm_ctxt_switch_to(struct vc
> >  static void svm_do_resume(struct vcpu *v)
> >  {
> >      bool_t debug_state = v->domain->debugger_attached;
> > -
> > -    if ( unlikely(v->arch.hvm_vcpu.debug_state_latch != debug_state) )
> > +    bool_t guestmode = nestedhvm_vcpu_in_guestmode(v);
> > +
> > +    if ( !guestmode &&
> > +        unlikely(v->arch.hvm_vcpu.debug_state_latch != debug_state) )
> >      {
> >          uint32_t mask = (1U << TRAP_debug) | (1U << TRAP_int3);
> >          v->arch.hvm_vcpu.debug_state_latch = debug_state;
> > @@ -712,11 +754,14 @@ static void svm_do_resume(struct vcpu *v
> >          hvm_asid_flush_vcpu(v);
> >      }
> >
> > -    /* Reflect the vlapic's TPR in the hardware vtpr */
> > -    v->arch.hvm_svm.vmcb->vintr.fields.tpr =
> > -        (vlapic_get_reg(vcpu_vlapic(v), APIC_TASKPRI) & 0xFF) >> 4;
> > -
> > -    hvm_do_resume(v);
> > +    if ( !guestmode )
> > +    {
> > +        /* Reflect the vlapic's TPR in the hardware vtpr */
> > +        v->arch.hvm_svm.vmcb->vintr.fields.tpr =
> > +            (vlapic_get_reg(vcpu_vlapic(v), APIC_TASKPRI) & 0xFF) >> 4;
> > +
> > +        hvm_do_resume(v);
> > +    }
>
> Can you explain why we shouldn't sync the vTPR and the vlapic state when
> the guest is in nested mode?

When the vcpu is in guest mode, v->arch.hvm_svm.vmcb->vintr.fields.tpr
represents the TPR of the L2 guest. The L2 guest is not allowed to touch the
L1 guest's vTPR, so the sync has to be skipped while L2 is running.

> [...]
>
> > +    } else {
> > +        /* host shadow paging + guest shadow paging. */
> > +        host_vmcb->np_enable = 0;
> > +        host_vmcb->h_cr3 = 0x0;
> > +
> > +#if 0
> > +        host_vmcb->cr3 = v->shadow_shadow_table;
> > +
> > +        /* hvm_set_cr3() below sets v->arch.hvm_vcpu.guest_cr[3] for us.
> > */ +        rc = hvm_set_cr3(ns_vmcb->cr3);
> > +        if (rc != X86EMUL_OKAY)
> > +            gdprintk(XENLOG_ERR, "hvm_set_cr3 failed, rc: %u\n", rc);
> > +#endif
> > +    }
> > +
>
> Please remove this.

Done in local tree. We will need to revisit this when Intel comes with their
shadow-on-shadow implementation.

> [...]
>
> > +static void
> > +svm_vmexit_do_vmload(struct vmcb_struct *vmcb,
> > +                     struct cpu_user_regs *regs,
> > +                     struct vcpu *v, uint64_t vmcbaddr)
> > +{
> > +    int ret;
> > +    unsigned int inst_len;
> > +    struct vmcb_struct *tmp_vmcb;
> > +
> > +    if ( (inst_len = __get_instruction_length(v, INSTR_VMLOAD)) == 0 )
> > +        return;
> > +
> > +    /* tmp_vmcb can't be a local variable on the stack because
> > +     * the machine stops with a sudden freeze.
> > +     */
>
> A VMCB structure is large, and Xen stacks are small. :)  xmalloc()ing it
> on every call seems a bit fragile though - and silently returning if the
> xmalloc() fails is wrong.  Is this something that could be allocated
> once at VCPU creation time?

Yes. Done in local tree.

> > +    tmp_vmcb = xmalloc(struct vmcb_struct);
> > +    if (tmp_vmcb == NULL)
> > +        return;
> > +
>
> [...]
>
> > @@ -1623,11 +2895,25 @@ asmlinkage void svm_vmexit_handler(struc
> >
> >      case VMEXIT_MONITOR:
> >      case VMEXIT_MWAIT:
> > +        hvm_inject_exception(TRAP_invalid_op, HVM_DELIVER_NO_ERROR_CODE,
> > 0); +        break;
> > +
> >      case VMEXIT_VMRUN:
> > +        svm_vmexit_do_vmrun(regs, v,
> > +                            regs->eax);
> > +        break;
>
> svm_vmexit_do_vmrun() and the other routines called below don't seem to
> check that nested HVM is enabled; that seems like it would be bad if a
> non-nesting guest tries to execute VMRUN &c.  Or have you changed the
> intercept bits somehow so that never happens?

No, you're right. When nested HVM is disabled, these instructions should
result in a #UD. Fixed in local tree.

Christoph


> Cheers,
>
> Tim.
>
> >      case VMEXIT_VMLOAD:
> > +        svm_vmexit_do_vmload(vmcb, regs, v, regs->eax);
> > +        break;
> >      case VMEXIT_VMSAVE:
> > +        svm_vmexit_do_vmsave(vmcb, regs, v, regs->eax);
> > +        break;
> >      case VMEXIT_STGI:
> > +        svm_vmexit_do_stgi(regs, v);
> > +        break;
> >      case VMEXIT_CLGI:
> > +        svm_vmexit_do_clgi(regs, v);
> > +        break;
> >      case VMEXIT_SKINIT:
> >          hvm_inject_exception(TRAP_invalid_op, HVM_DELIVER_NO_ERROR_CODE,
> > 0); break;



-- 
---to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Einsteinring 24, 85609 Dornach b. Muenchen
Geschaeftsfuehrer: Alberto Bozzo, Andrew Bowd
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel