[Xen-devel] [PATCH 3/6][RESEND] xen: Add NUMA support to Xen

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] [PATCH 3/6][RESEND] xen: Add NUMA support to Xen
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: Fri, 12 May 2006 10:12:38 -0500
Cc: Ryan Grimm <grimm@xxxxxxxxxx>
Delivery-date: Fri, 12 May 2006 08:15:04 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20060501215722.GW16776@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20060501215722.GW16776@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6+20040907i
* Ryan Harper <ryanh@xxxxxxxxxx> [2006-05-01 17:00]:
> This patch modifies the increase_reservation and populate_physmap
> hypercalls used to allocate memory to a domain.  With NUMA support
> enabled, we balance the allocation by using the domain's vcpu placement
> to distribute the pages local to the physical cpu each vcpu will run on.

Updated to remove CONFIG_NUMA ifdefs.
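
For clarity, the selection logic in the hunks below boils down to striping
extents round-robin over the domain's vcpus and allocating each extent near
the physical cpu its vcpu runs on.  A minimal sketch of that logic, pulled
out into a standalone helper (the helper name pick_alloc_cpu is illustrative
only and is not something the patch introduces):

    /* Illustrative only: return the physical cpu whose local memory should
     * serve extent i.  Extents are striped round-robin over the domain's
     * vcpus, mirroring the d->vcpu[i % (max_vcpu_id+1)]->processor
     * expression used in the patch below. */
    static unsigned int pick_alloc_cpu(struct domain *d, unsigned long i)
    {
        int max_vcpu_id = 0;
        struct vcpu *v;

        for_each_vcpu ( d, v )
            if ( v->vcpu_id > max_vcpu_id )
                max_vcpu_id = v->vcpu_id;

        return d->vcpu[i % (max_vcpu_id + 1)]->processor;
    }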

-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx


diffstat output:
 memory.c |   26 ++++++++++++++++++++++----
 1 files changed, 22 insertions(+), 4 deletions(-)

Signed-off-by: Ryan Harper <ryanh@xxxxxxxxxx>
Signed-off-by: Ryan Grimm <grimm@xxxxxxxxxx>
---
# HG changeset patch
# User Ryan Harper <ryanh@xxxxxxxxxx>
# Node ID b92d38d9be2808b73dd87e0f3d61858540dc8f69
# Parent  15fba7ca6975a8aef6cb5d290767aeacc7304cd8
This patch modifies the increase_reservation and populate_physmap
hypercalls used to allocate memory to a domain.  We balance the
allocation by using the domain's vcpu placement to distribute the
pages local to the physical cpu each vcpu will run on.

diff -r 15fba7ca6975 -r b92d38d9be28 xen/common/memory.c
--- a/xen/common/memory.c       Thu May 11 20:48:10 2006
+++ b/xen/common/memory.c       Thu May 11 20:49:50 2006
@@ -39,6 +39,12 @@
 {
     struct page_info *page;
     unsigned long     i, mfn;
+    int max_vcpu_id = 0;
+    struct vcpu *v;
+
+    for_each_vcpu (d, v) 
+        if ( v->vcpu_id > max_vcpu_id )
+            max_vcpu_id = v->vcpu_id;
 
     if ( !guest_handle_is_null(extent_list) &&
          !guest_handle_okay(extent_list, nr_extents) )
@@ -56,8 +62,11 @@
             return i;
         }
 
-        if ( unlikely((page = alloc_domheap_pages(
-            d, extent_order, flags)) == NULL) )
+        /* spread each allocation across the total number of 
+         * vcpus allocated to this domain */
+        if ( unlikely((page = __alloc_domheap_pages( d, 
+            (d->vcpu[i % (max_vcpu_id+1)])->processor,
+            extent_order, flags )) == NULL) ) 
         {
             DPRINTK("Could not allocate order=%d extent: "
                     "id=%d flags=%x (%ld of %d)\n",
@@ -88,6 +97,12 @@
 {
     struct page_info *page;
     unsigned long    i, j, gpfn, mfn;
+    int max_vcpu_id = 0;
+    struct vcpu *v;
+
+    for_each_vcpu (d, v) 
+        if ( v->vcpu_id > max_vcpu_id )
+            max_vcpu_id = v->vcpu_id;
 
     if ( !guest_handle_okay(extent_list, nr_extents) )
         return 0;
@@ -107,8 +122,11 @@
         if ( unlikely(__copy_from_guest_offset(&gpfn, extent_list, i, 1)) )
             goto out;
 
-        if ( unlikely((page = alloc_domheap_pages(
-            d, extent_order, flags)) == NULL) )
+        /* spread each allocation across the total number of 
+         * vcpus allocated to this domain */
+        if ( unlikely((page = __alloc_domheap_pages( d, 
+            (d->vcpu[i % (max_vcpu_id+1)])->processor,
+            extent_order, flags )) == NULL) ) 
         {
             DPRINTK("Could not allocate order=%d extent: "
                     "id=%d flags=%x (%ld of %d)\n",

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
