xen-devel

RE: [Xen-devel] PAE issue (32-on-64 work)

To: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Subject: RE: [Xen-devel] PAE issue (32-on-64 work)
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Thu, 19 Oct 2006 12:34:43 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 19 Oct 2006 04:35:18 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <45377204.76E4.0078.0@xxxxxxxxxx> <3AAA99889D105740BE010EB6D5A5A3B20506E0@xxxxxxxxxxxxxxxxxxxxxxxxxx> <45377B3F.76E4.0078.0@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcbzcELKVVxDySZYRUyHPNOHWOTlGwAAWEjw
Thread-topic: [Xen-devel] PAE issue (32-on-64 work)
> >Why not just have a fixed per-vcpu L4 and L3, into which the 4 PAE L3's
> >get copied on every cr3 load?
> >It's most analogous to what happens today.
> 
> In the shadowing (PAE, 32bit) case (a code path that, as I said, I'd
> rather see ripped out).

Why? It's essential to allow PAE PGDs to live above 4GB, which is a PITA
otherwise. 
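
(For concreteness, a rough sketch of that per-vcpu copy idea; illustrative
only, the names pae_load_guest_cr3, v->arch.pae_l3 and so on are made up
here and are not actual Xen code:)

    /* Fixed per-vcpu L3; the per-vcpu L4 is pointed at it once, at vcpu
     * setup time, so the real CR3 never needs to change afterwards. */
    static void pae_load_guest_cr3(struct vcpu *v, unsigned long guest_cr3)
    {
        /* A PAE PDPT is only 32-byte aligned, so it can sit anywhere
         * within its page; hence the offset arithmetic. */
        char *page = map_domain_page(guest_cr3 >> PAGE_SHIFT);
        l3_pgentry_t *guest_l3 =
            (l3_pgentry_t *)(page + (guest_cr3 & ~PAGE_MASK));
        unsigned int i;

        /* Copy the guest's 4 PAE L3 entries into the fixed per-vcpu L3
         * on every guest cr3 load. */
        for ( i = 0; i < 4; i++ )
            v->arch.pae_l3[i] = guest_l3[i];

        unmap_domain_page(page);
    }

The point being that only the four copied L3 entries change on a guest cr3
load, not the hypervisor's own L4.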

> In the general 64-bit case, this would add another
> (needless) distinct code path. I think I still like better the idea of
> clearing out the final 518 entries.
> 
> >We've thought of removing the page-size restriction on PAE L3's in the
> >past, but it's pretty low down the priority list as it typically doesn't
> >cost a great deal of memory.
> 
> Ah. I would have felt different.

Most machines probably only have a hundred processes (we can exclude
kernel threads and threads in general), hence maybe a few hundred KB
wasted, tops.  
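
(To put rough illustrative numbers on that: a PAE PGD is 4 entries x 8
bytes = 32 bytes, but the page-size restriction costs a whole 4KB page per
PGD, so roughly 4KB wasted per process; at ~100 processes that's on the
order of 400KB.)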

If we did remove the size restriction, we'd still want to put them in
their own slab cache rather than the general 32-byte cache, as you don't
want them sharing a page with other non-PGD data. It's exactly this PITA
that dictates how we handle shadowing of PAE PGDs in the HVM case, where
we can't control what they're allocated alongside.
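
(Sketch only, not an actual patch, and kmem_cache_create's exact signature
varies by kernel version: on the Linux side the dedicated cache would look
roughly like the below, so a PAE PGD never shares a page with unrelated
32-byte objects.)

    #include <linux/slab.h>

    static struct kmem_cache *pae_pgd_cache;

    void __init pae_pgd_cache_init(void)
    {
        /* 4 entries x 8 bytes = 32 bytes, 32-byte aligned as PAE requires;
         * a dedicated cache keeps PGDs off the general 32-byte slab. */
        pae_pgd_cache = kmem_cache_create("pae_pgd", 32, 32, 0, NULL);
    }

    pgd_t *pgd_alloc_pae(void)
    {
        return kmem_cache_alloc(pae_pgd_cache, GFP_KERNEL);
    }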

Ian



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel