WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

[Xen-devel] Re: [PATCH 4/6] xen-gntdev: Support mapping in HVM domains

To: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH 4/6] xen-gntdev: Support mapping in HVM domains
From: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
Date: Fri, 04 Mar 2011 11:34:59 -0500
Cc: "jeremy@xxxxxxxx" <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Delivery-date: Fri, 04 Mar 2011 08:36:50 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <1299254231.17339.10.camel@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: National Security Agency
References: <1295625548-22069-1-git-send-email-dgdegra@xxxxxxxxxxxxx> <1295625548-22069-5-git-send-email-dgdegra@xxxxxxxxxxxxx> <20110127185231.GB27853@xxxxxxxxxxxx> <1299254231.17339.10.camel@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101209 Fedora/3.1.7-0.35.b3pre.fc13 Thunderbird/3.1.7
On 03/04/2011 10:57 AM, Ian Campbell wrote:
> On Thu, 2011-01-27 at 18:52 +0000, Konrad Rzeszutek Wilk wrote:
>>> @@ -179,11 +184,32 @@ static void gntdev_put_map(struct grant_map *map)
>>>  
>>>       atomic_sub(map->count, &pages_mapped);
>>>  
>>> -     if (map->pages)
>>> +     if (map->pages) {
>>> +             if (!use_ptemod)
>>> +                     unmap_grant_pages(map, 0, map->count);
>>> +
>>>               for (i = 0; i < map->count; i++) {
>>> -                     if (map->pages[i])
>>> +                     uint32_t check, *tmp;
>>> +                     if (!map->pages[i])
>>> +                             continue;
>>> +                     /* XXX When unmapping in an HVM domain, Xen will
>>> +                      * sometimes end up mapping the GFN to an invalid MFN.
>>> +                      * In this case, writes will be discarded and reads will
>>> +                      * return all 0xFF bytes.  Leak these unusable GFNs
>>
>> I forgot to ask, under what version of Xen did you run this? I want to add
>> that to the comment so when it gets fixed we know what the failing version 
>> is.
>>
>>> +                      * until Xen supports fixing their p2m mapping.
>>> +                      */
>>> +                     tmp = kmap(map->pages[i]);
>>> +                     *tmp = 0xdeaddead;
> 
> I've just tripped over this check which faults in my PV guest. Seems to
> be related to the handling failures of map_grant_pages()?
> 
> Was the underlying Xen issue here reported somewhere more obvious than
> this comment buried in a patch to the kernel?
> 
> If not please can you raise it as a separate thread clearly marked as a
> hypervisor issue/question, all I can find is bits and pieces spread
> through the threads associated with this kernel patch. I don't think
> I've got a clear enough picture of the issue from those fragments to
> pull it together into a sensible report.
> 
> Ian.
> 

I think there may be other bugs lurking here with these freed pages; I have
observed that even pages that pass this kmap check can become bad at a later
time. This might be due to TLB issues; I haven't had a chance to debug it.
I do have a patch that prevents the pages that have been granted from being
recycled into general use by the kernel; I hadn't posted it yet because it
didn't resolve the issue completely.

8<---------------------------------------------------------
xen-gntdev: avoid reuse of possibly-bad pages
    
In HVM, the unmap hypercall does not reliably associate a valid MFN with
a just-unmapped GFN. A simple validity test of the page is not
sufficient to determine if it will remain valid; pages have been
observed to remain mapped and later become invalid.
    
Instead of releasing the pages to the allocator, keep them in a list to
reuse their GFNs for future mappings, which should always produce a valid
mapping.

** Note: this patch is an RFC, not for a stable patch queue **
    
Signed-off-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>

diff --git a/drivers/xen/gntdev.c b/drivers/xen/gntdev.c
index d43ff30..b9b1577 100644
--- a/drivers/xen/gntdev.c
+++ b/drivers/xen/gntdev.c
@@ -54,6 +54,12 @@ MODULE_PARM_DESC(limit, "Maximum number of grants that may be mapped by "
 
 static atomic_t pages_mapped = ATOMIC_INIT(0);
 
+/* Pages that are unsafe to use except as gntdev pages due to bad PFN/MFN
+ * mapping after an unmap.
+ */
+static LIST_HEAD(page_reuse);
+static DEFINE_SPINLOCK(page_reuse_lock);
+
 static int use_ptemod;
 
 struct gntdev_priv {
@@ -122,13 +128,21 @@ static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count)
            NULL == add->pages)
                goto err;
 
+       spin_lock(&page_reuse_lock);
        for (i = 0; i < count; i++) {
-               add->pages[i] = alloc_page(GFP_KERNEL | __GFP_HIGHMEM);
-               if (add->pages[i] == NULL)
-                       goto err;
+               if (list_empty(&page_reuse)) {
+                       add->pages[i] = alloc_page(GFP_KERNEL | __GFP_HIGHMEM);
+                       if (add->pages[i] == NULL)
+                               goto err_unlock;
+               } else {
+                       add->pages[i] = list_entry(page_reuse.next,
+                                                  struct page, lru);
+                       list_del(&add->pages[i]->lru);
+               }
                add->map_ops[i].handle = -1;
                add->unmap_ops[i].handle = -1;
        }
+       spin_unlock(&page_reuse_lock);
 
        add->index = 0;
        add->count = count;
@@ -136,12 +150,13 @@ static struct grant_map *gntdev_alloc_map(struct gntdev_priv *priv, int count)
 
        return add;
 
+err_unlock:
+       for (i = 0; i < count; i++) {
+               if (add->pages[i])
+                       list_add(&add->pages[i]->lru, &page_reuse);
+       }
+       spin_unlock(&page_reuse_lock);
 err:
-       if (add->pages)
-               for (i = 0; i < count; i++) {
-                       if (add->pages[i])
-                               __free_page(add->pages[i]);
-               }
        kfree(add->pages);
        kfree(add->grants);
        kfree(add->map_ops);
@@ -202,29 +217,13 @@ static void gntdev_put_map(struct grant_map *map)
                if (!use_ptemod)
                        unmap_grant_pages(map, 0, map->count);
 
+               spin_lock(&page_reuse_lock);
                for (i = 0; i < map->count; i++) {
-                       uint32_t check, *tmp;
                        if (!map->pages[i])
                                continue;
-                       /* XXX When unmapping in an HVM domain, Xen will
-                        * sometimes end up mapping the GFN to an invalid MFN.
-                        * In this case, writes will be discarded and reads will
-                        * return all 0xFF bytes.  Leak these unusable GFNs
-                        * until Xen supports fixing their p2m mapping.
-                        *
-                        * Confirmed present in Xen 4.1-RC3 with HVM source
-                        */
-                       tmp = kmap(map->pages[i]);
-                       *tmp = 0xdeaddead;
-                       mb();
-                       check = *tmp;
-                       kunmap(map->pages[i]);
-                       if (check == 0xdeaddead)
-                               __free_page(map->pages[i]);
-                       else
-                               pr_debug("Discard page %d=%ld\n", i,
-                                       page_to_pfn(map->pages[i]));
+                       list_add(&map->pages[i]->lru, &page_reuse);
                }
+               spin_unlock(&page_reuse_lock);
        }
        kfree(map->pages);
        kfree(map->grants);

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
