To: xen-changelog@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-changelog] [xen-3.1-testing] hvm: Allocate memory for hvm domains in batches.
From: "Xen patchbot-3.1-testing" <patchbot-3.1-testing@xxxxxxxxxxxxxxxxxxx>
Date: Wed, 09 Apr 2008 09:10:39 -0700
Delivery-date: Wed, 09 Apr 2008 09:11:09 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-changelog-request@lists.xensource.com?subject=help>
List-id: BK change log <xen-changelog.lists.xensource.com>
List-post: <mailto:xen-changelog@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-changelog>, <mailto:xen-changelog-request@lists.xensource.com?subject=unsubscribe>
Reply-to: xen-devel@xxxxxxxxxxxxxxxxxxx
Sender: xen-changelog-bounces@xxxxxxxxxxxxxxxxxxx
# HG changeset patch
# User Keir Fraser <keir.fraser@xxxxxxxxxx>
# Date 1207756754 -3600
# Node ID f09ebcbd239e3572ee8b607d5aa16595614de838
# Parent  a5f7058a959f8597a809350f37aae00086da3ac6
hvm: Allocate memory for hvm domains in batches.

Without this change, dom0 is unresponsive while the hvm domain's
physmap is populated in xen.

Signed-off-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
xen-unstable changeset:   17416:aee133a8e5e72bc9a6da4bb1619931992da3b6ff
xen-unstable date:        Wed Apr 09 15:25:16 2008 +0100
---
 tools/libxc/xc_hvm_build.c |   19 +++++++++++++++----
 1 files changed, 15 insertions(+), 4 deletions(-)

diff -r a5f7058a959f -r f09ebcbd239e tools/libxc/xc_hvm_build.c
--- a/tools/libxc/xc_hvm_build.c        Wed Apr 09 15:00:25 2008 +0100
+++ b/tools/libxc/xc_hvm_build.c        Wed Apr 09 16:59:14 2008 +0100
@@ -165,7 +165,7 @@ static int setup_guest(int xc_handle,
 {
     xen_pfn_t *page_array = NULL;
     unsigned long i, nr_pages = (unsigned long)memsize << (20 - PAGE_SHIFT);
-    unsigned long shared_page_nr;
+    unsigned long shared_page_nr, cur_pages;
     struct xen_add_to_physmap xatp;
     struct shared_info *shared_info;
     void *e820_page;
@@ -215,12 +215,23 @@ static int setup_guest(int xc_handle,
     for ( i = HVM_BELOW_4G_RAM_END >> PAGE_SHIFT; i < nr_pages; i++ )
         page_array[i] += HVM_BELOW_4G_MMIO_LENGTH >> PAGE_SHIFT;
 
-    /* Allocate memory for HVM guest, skipping VGA hole 0xA0000-0xC0000. */
+    /*
+     * Allocate memory for HVM guest, skipping VGA hole 0xA0000-0xC0000.
+     * We allocate pages in batches of no more than 2048 to ensure that
+     * we can be preempted and hence dom0 remains responsive.
+     */
     rc = xc_domain_memory_populate_physmap(
         xc_handle, dom, 0xa0, 0, 0, &page_array[0x00]);
-    if ( rc == 0 )
+    cur_pages = 0xc0;
+    while ( (rc == 0) && (nr_pages > cur_pages) )
+    {
+        unsigned long count = nr_pages - cur_pages;
+        if ( count > 2048 )
+            count = 2048;
         rc = xc_domain_memory_populate_physmap(
-            xc_handle, dom, nr_pages - 0xc0, 0, 0, &page_array[0xc0]);
+            xc_handle, dom, count, 0, 0, &page_array[cur_pages]);
+        cur_pages += count;
+    }
     if ( rc != 0 )
     {
         PERROR("Could not allocate memory for HVM guest.\n");

_______________________________________________
Xen-changelog mailing list
Xen-changelog@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-changelog
