
RE: [Xen-devel] PAE issue (32-on-64 work)

  • To: "Jan Beulich" <jbeulich@xxxxxxxxxx>
  • From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
  • Date: Thu, 19 Oct 2006 12:34:43 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Thu, 19 Oct 2006 04:35:18 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcbzcELKVVxDySZYRUyHPNOHWOTlGwAAWEjw
  • Thread-topic: [Xen-devel] PAE issue (32-on-64 work)

> >Why not just have a fixed per-vcpu L4 and L3, into which the 4 PAE
> >L3 entries get copied on every cr3 load?
> >It's most analogous to what happens today.
> Only in the shadowing (PAE, 32-bit) case (a code path that, as I
> said, I'd like to see ripped out).

Why? It's essential for allowing PAE PGDs to live above 4GB, which is
otherwise a PITA.

> In the general 64-bit case, this would add another (needless)
> distinct code path. I still prefer the idea of clearing out the
> final 508 entries.
> >We've thought of removing the page-size restriction on PAE L3's in
> >the past, but it's pretty low down the priority list as it typically
> >doesn't cost a great deal of memory.
> Ah, I would have felt differently.

Most machines probably have only around a hundred processes (we can
exclude kernel threads, and threads in general, since they share an
existing PGD), hence maybe a few hundred KB wasted, tops.

If we did remove the size restriction, we'd still want to put them in
their own slab cache rather than the general 32-byte cache, as you don't
want them sharing pages with other, non-PGD data. This is a PITA that
dictates how we handle shadowing of PAE PGDs in the HVM case, where we
can't control what they're allocated alongside.


Xen-devel mailing list


