xen-devel

RE: [Xen-devel] [PATCH 05/15] Nested Virtualization: core

To: "Dong, Eddie" <eddie.dong@xxxxxxxxx>, Christoph Egger <Christoph.Egger@xxxxxxx>
Subject: RE: [Xen-devel] [PATCH 05/15] Nested Virtualization: core
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Wed, 18 Aug 2010 16:27:52 +0800
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Dong, Eddie" <eddie.dong@xxxxxxxxx>
> +
> +/* The exitcode is in native SVM/VMX format. The forced exitcode
> + * is in generic format.
> + */

Introducing a 3rd format of exitcode is over-complicated IMO.
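To illustrate the cost of a generic format: every native exitcode then needs an
explicit mapping in each direction, roughly like the sketch below (the enum and the
helper are made up for illustration; only the VMEXIT_* constants are SVM's native
exitcodes).

    /* Illustration only -- not the patch's actual definitions. */
    enum generic_exitcode {
        GEN_EXIT_INTR,      /* external interrupt */
        GEN_EXIT_CPUID,     /* CPUID intercept */
        GEN_EXIT_VMMCALL,   /* hypercall from the L2 guest */
        /* ... one entry per intercept both SVM and VMX can raise ... */
    };

    /* Translate a native SVM exitcode into the generic format. */
    static enum generic_exitcode svm_to_generic(uint64_t exitcode)
    {
        switch ( exitcode )
        {
        case VMEXIT_INTR:    return GEN_EXIT_INTR;
        case VMEXIT_CPUID:   return GEN_EXIT_CPUID;
        case VMEXIT_VMMCALL: return GEN_EXIT_VMMCALL;
        default:             /* every new intercept needs a new mapping */
            BUG();
        }
    }

plus a VMX twin of the same helper, and the reverse direction wherever a generic
value has to be turned back into a native one.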

> +enum nestedhvm_vmexits
> +nestedhvm_vcpu_vmexit(struct vcpu *v, struct cpu_user_regs *regs,
> +                     uint64_t exitcode)
> +{

I doubt the necessity of this kind of wrapper.

In single-layer virtualization, SVM and VMX each have their own handler for every VM 
exit. Control only passes from the SVM/VMX code to the common code when a particular 
common function is invoked, because the two have quite a few differences and the savings 
from wrapping that function are really small, while we would pay with additional 
complexity on both the SVM and VMX sides, as well as in readability and performance. 
Furthermore, it may limit the flexibility to implement something new on either side.

Back to nested virtualization: I am not fully convinced we need a common 
handler for VM entry/exit, at least not for now. It is basically the same 
situation as the single-layer case above. Rather, we would prefer to jump from 
SVM/VMX code into common code only when a particular common service is requested.

Will that be easier?
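To make the suggestion concrete, the structure I have in mind is roughly the one
below (the function names are only illustrative, not existing code): the nested
#VMEXIT decision and the L1 VMCB handling stay on the SVM side, and common code is
only entered for the bookkeeping that is genuinely vendor-neutral.

    /* Sketch of an SVM-side nested exit handler calling common services. */
    static int nsvm_vmexit(struct vcpu *v, struct cpu_user_regs *regs,
                           uint64_t exitcode)
    {
        /* SVM-specific: decide whether L1 intercepts this exit, based on
         * the intercept bits in the L1 guest's VMCB. */
        if ( !nsvm_l1_intercepts(v, exitcode) )
            return 0;    /* not intercepted by L1: handle in L0 as usual */

        /* Common service: vendor-neutral bookkeeping of the nested state
         * (pending injections, per-vcpu flags, ...). */
        nestedhvm_prepare_vmexit(v, regs);

        /* SVM-specific: copy L2 state into the L1 VMCB, set exitcode and
         * exitinfo, clear GIF and resume L1. */
        nsvm_inject_vmexit_to_l1(v, regs, exitcode);
        return 1;
    }

The VMX side would have its own equivalent, sharing only nestedhvm_prepare_vmexit().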


> +             }
> +
> +             /* host state has been restored */
> +     }
> +
> +     nestedsvm_vcpu_clgi(v);

This is SVM-specific; it would be better to call it from the SVM code itself.
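Something along these lines, for example (the surrounding function is made up; only
nestedsvm_vcpu_clgi() comes from the patch):

    /* SVM-only path that switches back to the L1 guest after a nested exit. */
    static void nsvm_vmexit_to_l1(struct vcpu *v)
    {
        /* ... copy L2 state back into the L1 guest's VMCB ... */

        /* Clearing GIF is an SVM concept (CLGI); keep it in svm code so
         * the VMX path never has to see or stub it out. */
        nestedsvm_vcpu_clgi(v);
    }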

> +
> +     /* Prepare for running the guest. Do some final SVM/VMX
> +      * specific tweaks if necessary to make it work.
> +      */
> +     rc = hvm_nestedhvm_vcpu_vmexit(v, regs, exitcode);
> +     hvm->nh_hostflags.fields.forcevmexit = 0;
> +     if (rc) {
> +             hvm->nh_hostflags.fields.vmentry = 0;
> +             return NESTEDHVM_VMEXIT_FATALERROR;
> +     }

Thx, Eddie
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
