[Xen-devel] RE: [PATCH 02/13] Nested Virtualization: data structure

To: Christoph Egger <Christoph.Egger@xxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] RE: [PATCH 02/13] Nested Virtualization: data structure
From: "Dong, Eddie" <eddie.dong@xxxxxxxxx>
Date: Fri, 17 Sep 2010 13:39:30 +0800
Cc: "Dong, Eddie" <eddie.dong@xxxxxxxxx>, Tim Deegan <Tim.Deegan@xxxxxxxxxx>
In-reply-to: <201009011656.32897.Christoph.Egger@xxxxxxx>
References: <201009011656.32897.Christoph.Egger@xxxxxxx>

To speed up progress on nested virtualization support, I want to use this 
thread to discuss the data structure issues; I think if we can converge 
here, the later patches will be much easier.

Given that we have a conflict over whether to introduce a new naming space, 
and that the decision will have a big impact on the wrapper APIs, I suggest 
we defer it until the pieces we agree on are checked in, to simplify patch 
rebasing etc. We can proceed either 1: consensus patches -> neutral patches 
-> fundamental argument, or 2: consensus patches -> fundamental argument -> 
neutral patches.
Here are my comments on the original patch, followed by my proposal.

thx, eddie


Christoph Egger wrote:
> # HG changeset patch
> # User cegger
> # Date 1283345873 -7200
> Data structures for Nested Virtualization
> 
> diff -r ecec3d163efa -r 32aec447e8a1 xen/include/asm-x86/hvm/hvm.h
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -52,7 +52,8 @@ enum hvm_intblk {
>      hvm_intblk_shadow,    /* MOV-SS or STI shadow */
>      hvm_intblk_rflags_ie, /* RFLAGS.IE == 0 */
>      hvm_intblk_tpr,       /* LAPIC TPR too high */
> -    hvm_intblk_nmi_iret   /* NMI blocked until IRET */
> +    hvm_intblk_nmi_iret,  /* NMI blocked until IRET */
> +    hvm_intblk_svm_gif,   /* GIF cleared */

That one is SVM-specific and we don't have consensus on it yet; we can 
discuss it later, once the fundamental argument is settled.

>  };
> 
>  /* These happen to be the same as the VMX interrupt shadow definitions. */
> @@ -180,6 +181,8 @@ int hvm_girq_dest_2_vcpu_id(struct domai
>  #define hvm_pae_enabled(v) \
>      (hvm_paging_enabled(v) && ((v)->arch.hvm_vcpu.guest_cr[4] & X86_CR4_PAE))
>  #define hvm_nx_enabled(v) \
>      (!!((v)->arch.hvm_vcpu.guest_efer & EFER_NX))
> +#define hvm_svm_enabled(v) \
> +    (!!((v)->arch.hvm_vcpu.guest_efer & EFER_SVME))

That is the nested-SVM-on-VMX model. Could you move this to the SVM-specific 
tree? I suggest we add a wrapper, bool_t nestedhvm_is_enabled(), with SVM 
and VMX each providing a callback for it.
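A minimal sketch of the shape I have in mind (the hvm_funcs slot and the 
SVM callback name here are hypothetical):

/* Generic wrapper; SVM and VMX each fill in their own callback. */
static inline bool_t nestedhvm_is_enabled(struct vcpu *v)
{
    return hvm_funcs.nestedhvm_enabled(v);
}

/* SVM's callback keeps the EFER.SVME test private to the SVM tree: */
static bool_t svm_nestedhvm_enabled(struct vcpu *v)
{
    return !!(v->arch.hvm_vcpu.guest_efer & EFER_SVME);
}

That way the common code never needs to see an SVM-specific EFER bit.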

> 
>  #define hvm_hap_has_1gb(d) \
>      (hvm_funcs.hap_capabilities & HVM_HAP_SUPERPAGE_1GB)
> diff -r ecec3d163efa -r 32aec447e8a1 xen/include/asm-x86/hvm/vcpu.h
> --- a/xen/include/asm-x86/hvm/vcpu.h
> +++ b/xen/include/asm-x86/hvm/vcpu.h
> @@ -35,6 +35,61 @@ enum hvm_io_state {
>      HVMIO_completed
>  };
> 
> +struct nestedhvm {

I don't think this should be nestedhvm, but nested_vcpu.

> +    bool_t nh_gif; /* vcpu's GIF, always true on VMX */

No consensus for now.

> +    bool_t nh_guestmode; /* vcpu in guestmode? */

I want to use an n1/n2 prefix or suffix for better terminology; guestmode 
is confusing, given that L1 is also a guest.

> +    void *nh_vm; /* VMCB/VMCS */
> +    size_t nh_vmsize; /* size of VMCB/VMCS */

I don't like the pointer + size model; it feels like writing assembly code. 
I would rather move those structures to the arch-specific tree. This is part 
of the fundamental argument, so we can handle it later.
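For illustration, a typed arch-specific structure (names hypothetical) 
avoids the opaque pointer + size pair entirely:

/* Hypothetical sketch: each arch tree declares its own typed state. */
struct nestedsvm_vcpu {
    struct vmcb_struct *n2_vmcb;  /* VMCB the l1 guest set up for l2 */
};

struct nestedvmx_vcpu {
    struct vmcs_struct *n2_vmcs;  /* VMCS the l1 guest set up for l2 */
};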

> +
> +    /* guest vm address of 1st level guest, needed for VMEXIT */
> +    uint64_t nh_vmaddr;
> +    uint64_t nh_vmmaxaddr; /* Maximum supported address */

The term vm is confusing here.
Also, there is no consensus yet on whether these belong here or in the 
arch-specific tree; it depends heavily on whether we need the new naming 
space, which I am strongly against.

> +    void *nh_hostsave;
> +
> +    void *nh_arch; /* SVM/VMX specific data */
> +    size_t nh_arch_size;

ditto

> +
> +    /* Cached real MSR permission bitmaps of the nested guest */
> +    unsigned long *nh_cached_msrpm;
> +    size_t nh_cached_msrpm_size;
> +    /* Merged MSR permission bitmap */
> +    unsigned long *nh_merged_msrpm;
> +    size_t nh_merged_msrpm_size;

VMX doesn't need to support an MSR bitmap for the l2 guest, for efficiency 
reasons. Even in the L1 VMM we prefer on-demand software save/restore of 
MSRs to unconditional hardware save/restore (we may have a few in reality), 
so we won't implement that for the l2 guest.

Please move them to the SVM tree if SVM needs them.
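For context, merging the quoted patch's two bitmaps amounts to a bitwise 
OR: an MSR access must be intercepted if either L0 or the l1 guest 
intercepts it. A sketch, with hypothetical local names:

/* Sketch: intercept if either the host (L0) bitmap or the cached l1
 * bitmap intercepts the MSR. */
for ( i = 0; i < nh->nh_cached_msrpm_size / sizeof(unsigned long); i++ )
    nh->nh_merged_msrpm[i] = host_msrpm[i] | nh->nh_cached_msrpm[i];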

> +
> +    /* Cache guest cr3/host cr3 the guest sets up for the nested guest.
> +     * Used by Shadow-on-Shadow and Nested-on-Nested.
> +     * nh_vm_guestcr3: in l2 guest physical address space and points to
> +     *     the l2 guest page table
> +     * nh_vm_hostcr3: in l1 guest physical address space and points to
> +     *     the l1 guest nested page table
> +     */
> +    uint64_t nh_vm_guestcr3, nh_vm_hostcr3;

For nh_vm_guestcr3, I didn't see any real usage of it, even in SVM, where 
you just need to reset it in nestedhvm_vcpu_reset.
I would suggest moving them to the SVM-specific tree; since the nested vcpu 
state is part of the VCPU, you can easily do that in svmvcpu_reset.


BTW, I don't see the value of a cache here; it is just a memory access. I'd 
rather see an API to fetch them than maintain two copies of the same data.

> +    uint32_t nh_guest_asid;

Same as nh_vm_guestcr3: no real usage so far, and caching is not necessary 
(we can use a wrapper API).
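As a sketch of the wrapper-API alternative (assuming the SVM VMCB field 
names h_cr3 and guest_asid; the function names are hypothetical):

/* Fetch on demand from the VMCB the l1 guest provided, instead of
 * caching a second copy. */
static inline uint64_t nestedsvm_vm_hostcr3(struct vcpu *v)
{
    struct vmcb_struct *n2vmcb = vcpu_nestedhvm(v).nh_vm;
    return n2vmcb->h_cr3;
}

static inline uint32_t nestedsvm_guest_asid(struct vcpu *v)
{
    struct vmcb_struct *n2vmcb = vcpu_nestedhvm(v).nh_vm;
    return n2vmcb->guest_asid;
}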


> +    bool_t nh_flushp2m;
> +    struct p2m_domain *nh_p2m; /* used p2m table for this vcpu */
> +

We may need these two. How about renaming them with an n1/n2 prefix or 
suffix? Should p2m be n2p_to_l0m or n2p_to_l1m?
I suggest making names as precise as possible, especially for nested virt.

> +    /* Only meaningful when forcevmexit flag is set */
> +    struct {
> +        uint64_t exitcode;  /* generic exitcode */
> +        uint64_t exitinfo1; /* additional information to the exitcode */
> +        uint64_t exitinfo2; /* additional information to the exitcode */
> +    } nh_forcevmexit;
> +    union {
> +        uint32_t bytes;
> +        struct {
> +            uint32_t rflagsif : 1;
> +            uint32_t vintrmask : 1; /* always cleared on VMX */
> +            uint32_t forcevmexit : 1;
> +            uint32_t vmentry : 1;   /* true during vmentry/vmexit emulation */
> +            uint32_t reserved : 28;
> +        } fields;
> +    } nh_hostflags;

That is the key part of the fundamental argument.

> +
> +    bool_t nh_hap_enabled;

As far as I can see, it is only a cache. I question whether it is 
necessary, since an API can do this more easily.
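E.g. for SVM this could be derived on demand (a sketch, assuming the VMCB's 
np_enable nested-paging bit; the function name is hypothetical):

static inline bool_t nestedsvm_hap_enabled(struct vcpu *v)
{
    struct vmcb_struct *n2vmcb = vcpu_nestedhvm(v).nh_vm;
    return !!n2vmcb->np_enable;
}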

> +};
> +
> +#define vcpu_nestedhvm(v) ((v)->arch.hvm_vcpu.nestedhvm)

> +
>  struct hvm_vcpu {
>      /* Guest control-register and EFER values, just as the guest
>      sees them. */ unsigned long       guest_cr[5];
> @@ -86,6 +141,8 @@ struct hvm_vcpu {
> 
>      struct tasklet      assert_evtchn_irq_tasklet;
> 
> +    struct nestedhvm    nestedhvm;
> +
>      struct mtrr_state   mtrr;
>      u64                 pat_cr;




The next part is my proposal for a starting point, just for reference. We 
can gradually put more inside.


diff -r 97c202b6d963 xen/include/asm-x86/hvm/vcpu.h
--- a/xen/include/asm-x86/hvm/vcpu.h    Thu Sep 16 10:31:19 2010 +0800
+++ b/xen/include/asm-x86/hvm/vcpu.h    Thu Sep 16 11:52:03 2010 +0800
@@ -35,6 +35,10 @@
     HVMIO_completed
 };

+struct nested_vcpu {
+    bool_t n2_guest;    /* vcpu in layer 2 nested guest */
+};
+
 struct hvm_vcpu {
     /* Guest control-register and EFER values, just as the guest sees them. */
     unsigned long       guest_cr[5];
@@ -85,6 +89,8 @@
     } u;

     struct tasklet      assert_evtchn_irq_tasklet;
+    struct nested_vcpu  nvcpu;
+

     struct mtrr_state   mtrr;
     u64                 pat_cr;
@@ -123,4 +129,12 @@
     unsigned int mmio_large_write_bytes;
 };

+#define is_n2_guest(v)   ((v)->arch.hvm_vcpu.nvcpu.n2_guest)
+
+static inline void enter_n2_guest(struct hvm_vcpu *v, bool_t n2_guest)
+{
+    v->nvcpu.n2_guest = n2_guest;
+    return;
+}
+
 #endif /* __ASM_X86_HVM_VCPU_H__ */
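
Usage at the (hypothetical) vmentry/vmexit emulation call sites would then 
look like:

/* on emulated VMRUN/VMLAUNCH of the l2 guest */
enter_n2_guest(&v->arch.hvm_vcpu, 1);

/* on emulated VMEXIT back to the l1 guest */
enter_n2_guest(&v->arch.hvm_vcpu, 0);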
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
