[Xen-devel] __supported_pte_mask breaks PROT_NONE pages

To: Ingo Molnar <mingo@xxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>
Subject: [Xen-devel] __supported_pte_mask breaks PROT_NONE pages
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Wed, 04 Feb 2009 18:33:38 -0800
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, William Lee Irwin III <wli@xxxxxxxxxxxxxx>
On an x86 system that doesn't support global mappings, __supported_pte_mask has _PAGE_GLOBAL clear, to make sure it never appears in a PTE. pfn_pte() and so on enforce it with:

static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
{
        return __pte((((phys_addr_t)page_nr << PAGE_SHIFT) |
                      pgprot_val(pgprot)) & __supported_pte_mask);
}


However, we overload _PAGE_GLOBAL with _PAGE_PROTNONE on non-present ptes to distinguish them from swap entries, so applying __supported_pte_mask indiscriminately will clear that bit and corrupt the pte.
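
For reference, the overloading works because the hardware ignores bit 8 when the present bit is clear. Roughly paraphrased from the pgtable headers of this vintage (simplified, not verbatim):

#define _PAGE_BIT_GLOBAL        8
#define _PAGE_BIT_PROTNONE      _PAGE_BIT_GLOBAL  /* only valid if !PRESENT */
#define _PAGE_GLOBAL            (1 << _PAGE_BIT_GLOBAL)
#define _PAGE_PROTNONE          (1 << _PAGE_BIT_PROTNONE)

/*
 * A pte is "present" to the kernel if either bit is set; a pte with
 * neither bit set is free to be interpreted as a swap entry.
 */
static inline int pte_present(pte_t a)
{
        return pte_flags(a) & (_PAGE_PRESENT | _PAGE_PROTNONE);
}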

I guess the best fix is to apply __supported_pte_mask only to present ptes. This seems like the right solution to me, as it lets us ignore entirely the question of overlaps between the present pte bits and the non-present use of those bits for swap entries. (Patch below.)
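
To make the corruption concrete, here's a minimal standalone sketch of the masking arithmetic (an illustration using the bit values above, not the kernel's actual code path):

#include <stdio.h>
#include <stdint.h>

#define _PAGE_PRESENT   (1ULL << 0)
#define _PAGE_GLOBAL    (1ULL << 8)
#define _PAGE_PROTNONE  _PAGE_GLOBAL    /* overloaded on non-present ptes */

int main(void)
{
        /* Hardware without PGE: _PAGE_GLOBAL is masked out. */
        uint64_t supported_pte_mask = ~_PAGE_GLOBAL;

        /* A PROT_NONE pte: logically present, _PAGE_PRESENT clear. */
        uint64_t pte = _PAGE_PROTNONE;  /* | pfn and other flag bits */

        /* Old behaviour: mask applied unconditionally; _PAGE_PROTNONE
         * is cleared and the pte now looks like a swap entry. */
        uint64_t old = pte & supported_pte_mask;

        /* Fixed behaviour: only present ptes get masked. */
        uint64_t fixed = (pte & _PAGE_PRESENT)
                ? (pte & supported_pte_mask) : pte;

        printf("old:   %#llx\n", (unsigned long long)old);    /* 0x0 */
        printf("fixed: %#llx\n", (unsigned long long)fixed);  /* 0x100 */
        return 0;
}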

Alternatively, we could filter out the unwanted bits in set_pte() and so on, but that seems to undermine the utility of __supported_pte_mask.
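
Whichever way it's fixed, the user-visible invariant is that a PROT_NONE round trip must not lose the mapping. A hypothetical userspace sequence that exercises the affected pte_modify() path (on a fixed kernel this runs cleanly; the precise failure mode on an unfixed one is not spelled out here):

#include <assert.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        long pagesz = sysconf(_SC_PAGESIZE);
        char *p = mmap(NULL, pagesz, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        assert(p != MAP_FAILED);

        memset(p, 0xaa, pagesz);         /* populate the page */
        mprotect(p, pagesz, PROT_NONE);  /* pte becomes _PAGE_PROTNONE */
        mprotect(p, pagesz, PROT_READ);  /* restored via pte_modify() */
        assert(p[0] == (char)0xaa);      /* contents must survive */
        return 0;
}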

   J

Subject: x86: don't apply __supported_pte_mask to non-present ptes

__supported_pte_mask contains the set of flags we support on the
current hardware.  We also use bits in the pte for things like
logically present ptes with no permissions, and swap entries for
swapped out pages.  We should only apply __supported_pte_mask to
present ptes, because otherwise we may destroy other information being
stored in the ptes.

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>
---
arch/x86/include/asm/pgtable.h  |   26 ++++++++++++++++++++------
arch/x86/include/asm/xen/page.h |    2 +-
2 files changed, 21 insertions(+), 7 deletions(-)

===================================================================
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -316,16 +316,30 @@

extern pteval_t __supported_pte_mask;

+/*
+ * Mask out unsupported bits in a present pgprot.  Non-present pgprots
+ * can use those bits for other purposes, so leave them be.
+ */
+static inline pgprotval_t massage_pgprot(pgprot_t pgprot)
+{
+       pgprotval_t protval = pgprot_val(pgprot);
+
+       if (protval & _PAGE_PRESENT)
+               protval &= __supported_pte_mask;
+
+       return protval;
+}
+
static inline pte_t pfn_pte(unsigned long page_nr, pgprot_t pgprot)
{
-       return __pte((((phys_addr_t)page_nr << PAGE_SHIFT) |
-                     pgprot_val(pgprot)) & __supported_pte_mask);
+       return __pte(((phys_addr_t)page_nr << PAGE_SHIFT) |
+                    massage_pgprot(pgprot));
}

static inline pmd_t pfn_pmd(unsigned long page_nr, pgprot_t pgprot)
{
-       return __pmd((((phys_addr_t)page_nr << PAGE_SHIFT) |
-                     pgprot_val(pgprot)) & __supported_pte_mask);
+       return __pmd(((phys_addr_t)page_nr << PAGE_SHIFT) |
+                    massage_pgprot(pgprot));
}

static inline pte_t pte_modify(pte_t pte, pgprot_t newprot)
@@ -337,7 +351,7 @@
         * the newprot (if present):
         */
        val &= _PAGE_CHG_MASK;
-       val |= pgprot_val(newprot) & (~_PAGE_CHG_MASK) & __supported_pte_mask;
+       val |= massage_pgprot(newprot) & ~_PAGE_CHG_MASK;

        return __pte(val);
}
@@ -353,7 +367,7 @@

#define pte_pgprot(x) __pgprot(pte_flags(x) & PTE_FLAGS_MASK)

-#define canon_pgprot(p) __pgprot(pgprot_val(p) & __supported_pte_mask)
+#define canon_pgprot(p) __pgprot(massage_pgprot(p))

static inline int is_new_memtype_allowed(unsigned long flags,
                                                unsigned long new_flags)
===================================================================
--- a/arch/x86/include/asm/xen/page.h
+++ b/arch/x86/include/asm/xen/page.h
@@ -137,7 +137,7 @@
        pte_t pte;

        pte.pte = ((phys_addr_t)page_nr << PAGE_SHIFT) |
-               (pgprot_val(pgprot) & __supported_pte_mask);
+                       massage_pgprot(pgprot);

        return pte;
}


