xen-devel

Re: [Xen-devel] [PATCH 07/17] vmx: nest: handling VMX instruction exits

On Thu, 2010-05-20 at 18:53 +0800, Tim Deegan wrote:
> At 10:41 +0100 on 22 Apr (1271932879), Qing He wrote:
> > +    else
> > +    {
> > +        decode->type = VMX_INST_MEMREG_TYPE_MEMORY;
> > +        hvm_get_segment_register(v, sreg_to_index[info.fields.segment], 
> > &seg);
> > +        seg_base = seg.base;
> > +
> > +        base = info.fields.base_reg_invalid ? 0 :
> > +            reg_read(regs, info.fields.base_reg);
> > +
> > +        index = info.fields.index_reg_invalid ? 0 :
> > +            reg_read(regs, info.fields.index_reg);
> > +
> > +        scale = 1 << info.fields.scaling;
> > +
> > +        disp = __vmread(EXIT_QUALIFICATION);
> > +
> > +
> > +        decode->mem = seg_base + base + index * scale + disp;
> > +        decode->len = 1 << (info.fields.addr_size + 1);
> 
> Don't we need to check the segment limit, type &c here? 

Definitely. I know that a lot of error handling is missing; in
particular, not handling errors from hvm_copy_from_user is
nearly unacceptable. But since this was an RFC, I decided to show the
algorithm first.

I'll fix the missing error handling in the next version.
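
Concretely, I'm thinking of something along these lines for the decode
path (an untested sketch: it assumes hvm_virtual_to_linear_addr keeps
its current signature, and that helper already does the limit and type
checks you mention):

    unsigned long addr;

    /* Let the common helper apply the segment limit/type checks and
     * form the linear address; raise #GP(0) on failure (the #SS(0)
     * case for stack-relative operands is elided here). */
    if ( !hvm_virtual_to_linear_addr(sreg_to_index[info.fields.segment],
                                     &seg, base + index * scale + disp,
                                     decode->len, hvm_access_read,
                                     16 << info.fields.addr_size /* bits */,
                                     &addr) )
    {
        hvm_inject_exception(TRAP_gp_fault, 0, 0);
        return X86EMUL_EXCEPTION;
    }
    decode->mem = addr;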

> > +    case VMFAIL_VALID:
> > +        /* TODO: error number of VMFailValid */
> 
>  ? :)

There is a long list of VMfail error numbers, but VMMs typically
don't care about them very much.
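
For reference, the flag bookkeeping the SDM requires (presumably what
vmreturn does) is roughly the following; reporting a real error number
would additionally mean writing the VM-instruction error field of the
virtual VMCS (VMSUCCEED here is my name for the success case):

    /* SDM VMfail convention: clear all arithmetic flags first. */
    unsigned long eflags = regs->eflags &
        ~(X86_EFLAGS_CF | X86_EFLAGS_PF | X86_EFLAGS_AF |
          X86_EFLAGS_ZF | X86_EFLAGS_SF | X86_EFLAGS_OF);

    switch ( res )
    {
    case VMSUCCEED:           /* all flags clear */
        break;
    case VMFAIL_INVALID:      /* CF=1: no current VMCS */
        eflags |= X86_EFLAGS_CF;
        break;
    case VMFAIL_VALID:        /* ZF=1: error number goes in the
                                 VM-instruction error field */
        eflags |= X86_EFLAGS_ZF;
        break;
    }
    regs->eflags = eflags;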

> > +    hvm_copy_to_guest_phys(nest->gvmcs_pa, nest->vvmcs, PAGE_SIZE);
> 
> Do we care about failure here?  
> 
> > +    ASSERT(decode.type == VMX_INST_MEMREG_TYPE_MEMORY);
> > +    hvm_copy_from_guest_virt(&gpa, decode.mem, decode.len, 0);
> 
> We _definitely_ care about failure here!  We need to inject #PF rather
> than just using zero (and #GP/#SS based on the segment limit check I
> mentioned above).
> 
> Also somewhere we should be checking CR0.PE, CR4.VMXE and RFLAGS.VM and
> returning #UD if they're not correct.  And checking that CPL == 0, too.
> 

Yes, and I think I forgot about CPL == 0; that is an important check.
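
Something like the following at the top of each VMX instruction handler
should cover those cases (an untested sketch; I'm reading CPL out of
SS.DPL and assuming the current hvm_inject_exception signature):

    struct segment_register ss;

    /* #UD unless the guest is in a mode where VMX instructions are
     * legal: protected mode with CR4.VMXE set and not virtual-8086. */
    if ( !(v->arch.hvm_vcpu.guest_cr[0] & X86_CR0_PE) ||
         !(v->arch.hvm_vcpu.guest_cr[4] & X86_CR4_VMXE) ||
         (regs->eflags & X86_EFLAGS_VM) )
    {
        hvm_inject_exception(TRAP_invalid_op, HVM_DELIVER_NO_ERROR_CODE, 0);
        return X86EMUL_EXCEPTION;
    }

    /* #GP(0) unless CPL == 0. */
    hvm_get_segment_register(v, x86_seg_ss, &ss);
    if ( ss.attr.fields.dpl != 0 )
    {
        hvm_inject_exception(TRAP_gp_fault, 0, 0);
        return X86EMUL_EXCEPTION;
    }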

> > +    nest->vvmcs = alloc_xenheap_page();
> > +    if ( !nest->vvmcs )
> > +    {
> > +        gdprintk(XENLOG_ERR, "nest: allocation for virtual vmcs failed\n");
> > +        vmreturn(regs, VMFAIL_INVALID);
> > +        goto out;
> > +    }
> 
> Could we just take a writeable refcount of the guest memory rather than
> allocating our own copy?  ISTR the guest's not allowed to write directly
> to the VMCS memory anyway.  It would be expensive on 32-bit Xen (because
> of having to map/unmap all the time) but cheaper on 64-bit Xen (by
> skipping various 4k memcpy()s)
> 

The original intent was to make it more analogous to a possible hardware
solution (where the memory is not guaranteed to be usable until an
explicit VMCLEAR). However, we do have a so-called `PV VMCS' patch that
does what you want (so the guest can manipulate it directly).

On second thought, I think there is really no particular benefit in not
mapping it directly. I'll change it to do that.
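
Roughly, VMPTRLD would then do something like this, with VMCLEAR undoing
both the mapping and the reference (an untested sketch with the error
paths elided; `invalid' stands for the vmreturn(regs, VMFAIL_INVALID)
path):

    struct page_info *pg;
    p2m_type_t p2mt;
    mfn_t mfn;

    /* Take a writable reference on the guest frame backing the VMCS
     * and keep it mapped until the guest executes VMCLEAR. */
    mfn = gfn_to_mfn(v->domain, nest->gvmcs_pa >> PAGE_SHIFT, &p2mt);
    if ( !p2m_is_ram(p2mt) )
        goto invalid;

    pg = mfn_to_page(mfn_x(mfn));
    if ( !get_page_and_type(pg, v->domain, PGT_writable_page) )
        goto invalid;

    nest->vvmcs = map_domain_page_global(mfn_x(mfn));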

> > +int vmx_nest_handle_vmxoff(struct cpu_user_regs *regs)
> > +{
> 
> Needs error handling...
> 
> > +    ASSERT(decode.type == VMX_INST_MEMREG_TYPE_MEMORY);
> > +    hvm_copy_from_guest_virt(&gpa, decode.mem, decode.len, 0);
> 
> Error handling... #PF, segments, CPL != 0
> 
> > +    if ( nest->vmcs_invalid )
> > +    {
> > +        hvm_copy_from_guest_phys(nest->vvmcs, nest->gvmcs_pa, PAGE_SIZE);
> 
> I think you know what I'm going to say here. :)  Apart from the error
> paths the rest of this patch looks OK to me. 

I'll revise them.
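
For the hvm_copy_* call sites the fix will look the same everywhere,
something like this (a sketch; it relies on hvm_copy_from_guest_virt
already injecting #PF itself when the translation fails, which I
believe is its current behaviour):

    switch ( hvm_copy_from_guest_virt(&gpa, decode.mem, decode.len, 0) )
    {
    case HVMCOPY_okay:
        break;
    case HVMCOPY_bad_gva_to_gfn:
        /* hvm_copy_from_guest_virt has already injected #PF. */
        return X86EMUL_EXCEPTION;
    default:
        return X86EMUL_UNHANDLEABLE;
    }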

Thanks,
Qing

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
