[Xen-changelog] [xen-unstable] tools/libxc, hvm: Fix 1G page allocation algorithm

To: xen-changelog@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-changelog] [xen-unstable] tools/libxc, hvm: Fix 1G page allocation algorithm
From: Xen patchbot-unstable <patchbot@xxxxxxx>
Date: Sat, 29 Jan 2011 15:05:51 -0800
Delivery-date: Sat, 29 Jan 2011 15:12:19 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-changelog-request@lists.xensource.com?subject=help>
List-id: BK change log <xen-changelog.lists.xensource.com>
List-post: <mailto:xen-changelog@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=unsubscribe>
Reply-to: xen-devel@xxxxxxxxxxxxxxxxxxx
Sender: xen-changelog-bounces@xxxxxxxxxxxxxxxxxxx
# HG changeset patch
# User Shan Haitao <haitao.shan@xxxxxxxxx>
# Date 1296212929 0
# Node ID 722f7b7678dcdf3d522d7875a26491e2272aa66d
# Parent  f68570fb00322fcbfb249c84d3aa7a371c6c90ab
tools/libxc, hvm: Fix 1G page allocation algorithm

Currently, cur_pages (the index into page_array used for fetching
gfns) is used to judge whether it is appropriate to allocate 1G pages
at this point. However, cur_pages == page_array[cur_pages] only holds
true below 4G; above 4G, page_array[cur_pages] - cur_pages equals
256M worth of pfns, because the gfns there are shifted up past the
MMIO hole.
As a result, when a guest has 10G of memory, eight 1G pages are
allocated, but only two of them have 1G-aligned starting gfns. The
other six are forced to be split into 2M pages, as their starting
gfns are 4G+256M, 5G+256M, and so on.

With this patch, the true gfns are used instead of cur_pages, which
fixes the issue.
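
For illustration, the following minimal, self-contained sketch (not
part of the patch; the hole start and size constants are assumptions
chosen only to demonstrate the effect, not the layout actually
computed by setup_guest()) contrasts checking the page_array index
against checking the actual gfn for 1GB-superpage alignment once the
gfns above the MMIO hole are shifted:

/* Standalone illustration only -- not libxc code. Assumes a 256MB
 * MMIO hole starting at 0xf0000000; the real hole layout is computed
 * by setup_guest() and may differ. */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#define PAGE_SHIFT            12
#define SUPERPAGE_1GB_NR_PFNS (1UL << (30 - PAGE_SHIFT))   /* pfns per 1GB    */
#define HOLE_START_PFN        (0xf0000000UL >> PAGE_SHIFT) /* assumed hole    */
#define HOLE_NR_PFNS          (0x10000000UL >> PAGE_SHIFT) /* assumed 256MB   */

/* gfn layout: identity below the hole, shifted up by the hole size above it. */
static uint64_t index_to_gfn(uint64_t idx)
{
    return (idx < HOLE_START_PFN) ? idx : idx + HOLE_NR_PFNS;
}

int main(void)
{
    /* Walk the 1GB-aligned page_array *indices* of a 10GB guest. */
    for ( uint64_t idx = 0; idx < 10 * SUPERPAGE_1GB_NR_PFNS;
          idx += SUPERPAGE_1GB_NR_PFNS )
    {
        uint64_t gfn = index_to_gfn(idx);
        printf("index %#9" PRIx64 "  gfn %#9" PRIx64
               "  index aligned: %d  gfn aligned: %d\n",
               idx, gfn,
               (idx & (SUPERPAGE_1GB_NR_PFNS - 1)) == 0,
               (gfn & (SUPERPAGE_1GB_NR_PFNS - 1)) == 0);
    }
    return 0;
}

Every index in that walk is 1GB aligned, but once the walk passes the
assumed hole the corresponding gfns no longer are, which is why the
check in setup_guest() has to look at page_array[cur_pages] rather
than cur_pages itself.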

Signed-off-by: Shan Haitao <haitao.shan@xxxxxxxxx>
Acked-by: George Dunlap <george.dunlap@xxxxxxxxxx>
---
 tools/libxc/xc_hvm_build.c |   24 +++++++++++++-----------
 1 files changed, 13 insertions(+), 11 deletions(-)

diff -r f68570fb0032 -r 722f7b7678dc tools/libxc/xc_hvm_build.c
--- a/tools/libxc/xc_hvm_build.c        Fri Jan 28 06:03:01 2011 +0000
+++ b/tools/libxc/xc_hvm_build.c        Fri Jan 28 11:08:49 2011 +0000
@@ -137,7 +137,7 @@ static int setup_guest(xc_interface *xch
     xen_pfn_t *page_array = NULL;
     unsigned long i, nr_pages = (unsigned long)memsize << (20 - PAGE_SHIFT);
     unsigned long target_pages = (unsigned long)target << (20 - PAGE_SHIFT);
-    unsigned long entry_eip, cur_pages;
+    unsigned long entry_eip, cur_pages, cur_pfn;
     void *hvm_info_page;
     uint32_t *ident_pt;
     struct elf_binary elf;
@@ -215,11 +215,13 @@ static int setup_guest(xc_interface *xch
 
         if ( count > max_pages )
             count = max_pages;
-        
+
+        cur_pfn = page_array[cur_pages];
+
         /* Take care the corner cases of super page tails */
-        if ( ((cur_pages & (SUPERPAGE_1GB_NR_PFNS-1)) != 0) &&
-             (count > (-cur_pages & (SUPERPAGE_1GB_NR_PFNS-1))) )
-            count = -cur_pages & (SUPERPAGE_1GB_NR_PFNS-1);
+        if ( ((cur_pfn & (SUPERPAGE_1GB_NR_PFNS-1)) != 0) &&
+             (count > (-cur_pfn & (SUPERPAGE_1GB_NR_PFNS-1))) )
+            count = -cur_pfn & (SUPERPAGE_1GB_NR_PFNS-1);
         else if ( ((count & (SUPERPAGE_1GB_NR_PFNS-1)) != 0) &&
                   (count > SUPERPAGE_1GB_NR_PFNS) )
             count &= ~(SUPERPAGE_1GB_NR_PFNS - 1);
@@ -227,9 +229,9 @@ static int setup_guest(xc_interface *xch
         /* Attemp to allocate 1GB super page. Because in each pass we only
          * allocate at most 1GB, we don't have to clip super page boundaries.
          */
-        if ( ((count | cur_pages) & (SUPERPAGE_1GB_NR_PFNS - 1)) == 0 &&
+        if ( ((count | cur_pfn) & (SUPERPAGE_1GB_NR_PFNS - 1)) == 0 &&
              /* Check if there exists MMIO hole in the 1GB memory range */
-             !check_mmio_hole(cur_pages << PAGE_SHIFT,
+             !check_mmio_hole(cur_pfn << PAGE_SHIFT,
                               SUPERPAGE_1GB_NR_PFNS << PAGE_SHIFT) )
         {
             long done;
@@ -260,15 +262,15 @@ static int setup_guest(xc_interface *xch
                 count = max_pages;
             
             /* Clip partial superpage extents to superpage boundaries. */
-            if ( ((cur_pages & (SUPERPAGE_2MB_NR_PFNS-1)) != 0) &&
-                 (count > (-cur_pages & (SUPERPAGE_2MB_NR_PFNS-1))) )
-                count = -cur_pages & (SUPERPAGE_2MB_NR_PFNS-1);
+            if ( ((cur_pfn & (SUPERPAGE_2MB_NR_PFNS-1)) != 0) &&
+                 (count > (-cur_pfn & (SUPERPAGE_2MB_NR_PFNS-1))) )
+                count = -cur_pfn & (SUPERPAGE_2MB_NR_PFNS-1);
             else if ( ((count & (SUPERPAGE_2MB_NR_PFNS-1)) != 0) &&
                       (count > SUPERPAGE_2MB_NR_PFNS) )
                 count &= ~(SUPERPAGE_2MB_NR_PFNS - 1); /* clip non-s.p. tail */
 
             /* Attempt to allocate superpage extents. */
-            if ( ((count | cur_pages) & (SUPERPAGE_2MB_NR_PFNS - 1)) == 0 )
+            if ( ((count | cur_pfn) & (SUPERPAGE_2MB_NR_PFNS - 1)) == 0 )
             {
                 long done;
                 unsigned long nr_extents = count >> SUPERPAGE_2MB_SHIFT;

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
