[Xen-devel] PAE issue (32-on-64 work)

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] PAE issue (32-on-64 work)
From: "Jan Beulich" <jbeulich@xxxxxxxxxx>
Date: Thu, 19 Oct 2006 11:39:32 +0100
Delivery-date: Thu, 19 Oct 2006 03:38:33 -0700
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
As I have expressed before, I think the current way of handling the top
level of PAE paging is inappropriate, even after the above-4G adjustments
that cured part of the problem. Specifically:
- the handling isn't consistent with how hardware behaves in the same
situation (though Xen's behavior is probably within the bounds of the
generic architecture specification): the processor reads the 4 top level
entries when CR3 gets re-loaded (and hence doesn't access them again
later in any way), while Xen treats them (including potential updates to
them) just like entries at any other level of the hierarchy (see the
sketch below this list)
- the guest still needs to allocate a full page, even though only the first 32
bytes of it are actually used
- the shadowing done in Xen could be avoided altogether by following
hardware behavior.
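
To illustrate the hardware behavior referred to in the first point, here is
a minimal stand-alone C sketch (not Xen or Linux code; all names such as
vcpu_sketch and load_cr3 are made up for illustration) of how the processor
snapshots only the first four top level entries when CR3 is loaded, so that
later in-place updates to the page are not observed until the next reload:

/*
 * Hypothetical sketch only: the four PDPTEs are read once when CR3 is
 * loaded and cached in internal registers; subsequent memory writes to
 * them are not seen until the next CR3 reload.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define PDPTE_COUNT 4
#define PRESENT     0x1ULL

struct vcpu_sketch {
    uint64_t cached_pdpte[PDPTE_COUNT];   /* the "hidden" PDPTE registers */
};

/* Emulates MOV to CR3: snapshot only the first 32 bytes (4 entries). */
static void load_cr3(struct vcpu_sketch *v, const uint64_t *l3_page)
{
    memcpy(v->cached_pdpte, l3_page, PDPTE_COUNT * sizeof(uint64_t));
}

int main(void)
{
    uint64_t l3_page[512] = { 0 };        /* guest allocates a full page... */
    l3_page[0] = 0x1000 | PRESENT;        /* ...but only entries 0-3 matter */

    struct vcpu_sketch v;
    load_cr3(&v, l3_page);

    /* A later in-place update is invisible until CR3 is reloaded. */
    l3_page[0] = 0x2000 | PRESENT;
    printf("cached: %#llx, memory: %#llx\n",
           (unsigned long long)v.cached_pdpte[0],
           (unsigned long long)l3_page[0]);

    load_cr3(&v, l3_page);                /* reload picks up the new value */
    printf("after reload: %#llx\n", (unsigned long long)v.cached_pdpte[0]);
    return 0;
}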

Just now I found that this results in an issue for the 32-on-64 work I'm
doing: Since none of the entries 4...511 of the PMD get initialized in Linux,
and since Xen nevertheless has to validate all 512 entries (in order to
avoid making translations available that could be used during speculative
execution), the validation has the potential to fail (and does in practice),
resulting in the guest dying. The only option I presently see is to special-
case the compatibility guest in the l3 handling and (I really hate to do
this) clear out the 508 supposedly unused entries (or at least clear
their present bits), meaning that no guest may ever make clever
assumptions and try to store some other data in the unused portion of
the pgd page.
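
For illustration, a minimal C sketch of the special-casing I have in mind
(not actual Xen code; all identifiers such as validate_l3_page and
COMPAT_L3_ENTRIES are hypothetical, and the real per-entry checks are only
a placeholder):

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define L3_ENTRIES        512   /* a full page of 8-byte entries */
#define COMPAT_L3_ENTRIES 4     /* all a PAE guest actually uses */
#define _PAGE_PRESENT     0x1ULL

/* Stand-in for the real per-entry validation the hypervisor performs. */
static bool l3e_ok(uint64_t l3e)
{
    if (!(l3e & _PAGE_PRESENT))
        return true;            /* not present: nothing to validate */
    /* ... the real type/reference-count checks would go here ... */
    return true;
}

/*
 * Validate an L3 page.  For a compat (32-on-64) guest, entries 4..511 are
 * never initialized by the guest, so instead of failing on whatever they
 * happen to contain, wipe them (or just clear _PAGE_PRESENT) so that no
 * stale translation can be picked up speculatively.
 */
static bool validate_l3_page(uint64_t *l3_page, bool is_compat_guest)
{
    for (unsigned int i = 0; i < L3_ENTRIES; i++) {
        if (is_compat_guest && i >= COMPAT_L3_ENTRIES) {
            l3_page[i] = 0;
            continue;
        }
        if (!l3e_ok(l3_page[i]))
            return false;
    }
    return true;
}

int main(void)
{
    uint64_t l3_page[L3_ENTRIES];
    /* Entries 4..511 left deliberately uninitialized, as in the Linux pgd. */
    for (unsigned int i = 0; i < COMPAT_L3_ENTRIES; i++)
        l3_page[i] = 0x1000 * (i + 1) | _PAGE_PRESENT;

    printf("validation %s\n",
           validate_l3_page(l3_page, true) ? "succeeds" : "fails");
    return 0;
}

The obvious downside, as said above, is that zeroing the tail of the page
behind the guest's back only works if no guest ever stores its own data
there.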

Thanks for sharing any other ideas on how to overcome this problem,
Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel