Re: [Xen-devel] [PATCH 18/18] Nested Virtualization: hap-on-hap

To: Christoph Egger <Christoph.Egger@xxxxxxx>
Subject: Re: [Xen-devel] [PATCH 18/18] Nested Virtualization: hap-on-hap
From: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Date: Fri, 16 Apr 2010 17:10:17 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Sun, 18 Apr 2010 10:24:26 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <201004161504.48740.Christoph.Egger@xxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <201004151443.52704.Christoph.Egger@xxxxxxx> <201004161504.48740.Christoph.Egger@xxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.18 (2008-05-17)
Hi, 

I'll start by saying that the overall design seems like a good one:
keeping a selection of shadow p2m tables around that follow
guest_p2m(host_p2m(l2gpa)), and updating them on demand.  I think it's
good to start with a simple system like this one, though we might find
that nested Xen performs poorly because of its tendency to discard ASIDs
on every vcpu context switch.
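
(Roughly, the translation being cached is: L2 gpa, through the L1 guest's
own p2m, to an L1 gpa, and then through the host p2m to a machine address.
A minimal sketch in C, with made-up helper names rather than anything from
the patch:

static paddr_t nested_translate(struct domain *d, paddr_t l2_gpa)
{
    /* Step 1: the L1 hypervisor's own p2m maps the L2 gpa to an L1 gpa. */
    paddr_t l1_gpa = guest_p2m_lookup(d, l2_gpa);   /* made-up name */

    /* Step 2: the host p2m maps the L1 gpa to a machine address. */
    paddr_t maddr = host_p2m_lookup(d, l1_gpa);     /* made-up name */

    /* hap-on-hap installs l2_gpa -> maddr in the shadow p2m on demand,
     * so later accesses are resolved by hardware. */
    return maddr;
}
)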

On the down side, the terminology is quite confusing.  There are a few
places where it's not clear what kind of cr3 value is meant, and I
couldn't understand this patch at all until I saw this comment:

> +/* With nested virtualization gva == nested gpa, hence we use paddr_t
> + * to not overflow. */

This is not right - the nested GPA is not a virtual address.  It's yet
another fictional physical address.  Can you please find some better
name for it?  At the moment you have functions called p2m_gva_to_gfn
that don't take a GVA and don't return a GFN.
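
To make that concrete, I'd expect something shaped more like this (purely
a naming suggestion on my part, not the patch's interface):

/* The input is an L2 guest physical address and the output another
 * (L1) guest physical address, so spell that out rather than calling
 * either end a "gva" or a "gfn". */
paddr_t nestedp2m_l2gpa_to_l1gpa(struct p2m_domain *np2m,
                                 paddr_t l2_gpa,
                                 p2m_type_t *p2mt);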


Anyway, backing away from the detail of the code, I have some more
general design questions:

- How does memory management work in general for the nested p2ms? 
- What happens if we run out of memory?

- What's the performance difference between shadow-on-HAP and HAP-on-HAP?
- What's the difference if the nested hypervisor runs more than
  MAX_NESTEDP2M VMs? 
- How was the figure for MAX_NESTEDP2M arrived at?
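
  My guess at the answer to the last two questions is a recycle-a-slot
  scheme along these lines (the names, the field and the constant's value
  are all my invention, not the patch's, so please correct me):

  #define MAX_NESTEDP2M 8                       /* invented value */

  static struct p2m_domain *nestedp2m_get(struct domain *d, uint64_t ncr3)
  {
      static unsigned int victim;               /* round-robin cursor */
      unsigned int i;

      /* Reuse a shadow p2m that already tracks this nested cr3... */
      for ( i = 0; i < MAX_NESTEDP2M; i++ )
          if ( d->arch.nested_p2m[i]->nested_cr3 == ncr3 )  /* invented */
              return d->arch.nested_p2m[i];

      /* ...otherwise evict one and rebuild it from scratch.  With more
       * than MAX_NESTEDP2M active L2 guests this happens on every
       * switch, which is what I'm asking about above. */
      i = victim++ % MAX_NESTEDP2M;
      nestedp2m_flush(d, d->arch.nested_p2m[i]);            /* invented */
      d->arch.nested_p2m[i]->nested_cr3 = ncr3;
      return d->arch.nested_p2m[i];
  }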

- What happens when the host changes the p2m table of the L1 guest? 
  Don't we need some sort of global flush of all the nested p2ms to
  maintain isolation?  Or is it implicitly handled by existing TLB
  shootdowns?
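
  What I'd expect is a hook on the host-p2m write path that drops every
  shadow p2m, along these lines (sketch only, with invented names):

  static void nestedp2m_flush_all(struct domain *d)
  {
      unsigned int i;

      /* Any change to the L1 guest's host p2m can invalidate entries in
       * every shadow p2m derived from it, so drop them all and let them
       * be rebuilt on demand. */
      for ( i = 0; i < MAX_NESTEDP2M; i++ )
          nestedp2m_flush(d, d->arch.nested_p2m[i]);   /* invented */

      /* Stale combined translations may also still be in the TLBs, so
       * the nested ASIDs need flushing too. */
      nestedhvm_flush_asids(d);                        /* invented */
  }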

- You seem to have duplicated a lot of the existing p2m-building code
  (type_to_flags, write_p2m_entry, next_level, &c).  Why is that?  I
  thought the whole point of patches 4, 6 and 17 was so that the nested
  p2ms could be handled with the same mechanism as normal p2ms?
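
  To be clear about what I was expecting: the nested p2ms would just be
  extra struct p2m_domain instances driven through the common code, very
  roughly like this (a sketch; the shared setter's name is mine, not
  anything from the series):

  static int nestedp2m_set_entry(struct p2m_domain *np2m,
                                 unsigned long l2_gfn, mfn_t mfn,
                                 p2m_type_t t)
  {
      /* p2m_common_set_entry() stands in for whatever shared setter
       * patches 4, 6 and 17 expose; only the table being written should
       * differ from the normal p2m case. */
      return p2m_common_set_entry(np2m, l2_gfn, mfn, t);
  }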

Cheers,

Tim.

-- 
Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, XenServer Engineering
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
