This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [PATCH 2/6] xen: Add NUMA support to Xen

To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 2/6] xen: Add NUMA support to Xen
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: Tue, 2 May 2006 09:53:45 -0500
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Ryan Grimm <grimm@xxxxxxxxxx>
Delivery-date: Tue, 02 May 2006 07:54:08 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <c3edf3c621b96acfe644e284bff8f241@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20060501215708.GV16776@xxxxxxxxxx> <c3edf3c621b96acfe644e284bff8f241@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.6+20040907i
* Keir Fraser <Keir.Fraser@xxxxxxxxxxxx> [2006-05-02 09:18]:
> On 1 May 2006, at 22:57, Ryan Harper wrote:
> >This patch introduces a per-node layer to the buddy allocator.  Xen
> >currently defines the heap as a two-dimensional array, [zone][order].
> >This patch adds a node layer between zone and order.  This allows Xen
> >to hand memory out in the proper zone while preferring local memory
> >allocation, but it can fall back on non-local memory to satisfy a
> >zone request.
> Loops over every memory chunk structure on the alloc/free paths aren't 
> going to get merged. There's no need for it -- in most cases memory 
> chunks are probably aligned on a MAX_ORDER boundary (or they will be 
> when I reduce MAX_ORDER, which requires me to fix up our Linux swiotlb 
> a bit first). When that isn't the case you can simply reserve guard 
> pages at the start and end of such chunks to avoid cross-chunk merging.
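[For readers following along: the node layer being discussed can be sketched roughly as below.  The constants, names, and fall-back loop are illustrative assumptions, not the actual patch or Xen source.]

```c
#include <stddef.h>

/* Hypothetical sizes for illustration only -- not Xen's real constants. */
#define NR_ZONES     3
#define MAX_NUMNODES 4
#define MAX_ORDER    11

/* Free-list head per (zone, node, order), mirroring the idea of adding
 * a node dimension between zone and order in the heap array. */
struct list_head { struct list_head *next, *prev; };

static struct list_head heap[NR_ZONES][MAX_NUMNODES][MAX_ORDER + 1];

/* Allocation preference: try the requested node first, then fall back
 * to any other node that has pages in the requested zone.  has_pages()
 * is a stand-in predicate for "free list non-empty at this order". */
static int pick_node(int zone, int want_node, int order,
                     int (*has_pages)(int zone, int node, int order))
{
    if (has_pages(zone, want_node, order))
        return want_node;
    for (int n = 0; n < MAX_NUMNODES; n++)
        if (n != want_node && has_pages(zone, n, order))
            return n;
    return -1; /* no memory available anywhere in this zone */
}
```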

I'll drop page_spans_chunk() and its caller in the free path, use guard
pages instead, and resubmit.  page_to_node() still uses the chunk array
to determine which node a struct page_info belongs to, and that lookup
is used in the free path.  Is that acceptable?
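[For context, the chunk-array lookup mentioned above amounts to scanning a small table of per-node PFN ranges.  The descriptor layout and table contents below are illustrative assumptions, not the patch's actual code.]

```c
#include <stddef.h>

/* Illustrative chunk descriptor: a contiguous PFN range owned by one node. */
struct node_chunk {
    unsigned long start_pfn, end_pfn; /* range is [start_pfn, end_pfn) */
    int node;
};

/* A tiny made-up table, as might be derived from the SRAT. */
static const struct node_chunk chunks[] = {
    { 0x00000, 0x40000, 0 },
    { 0x40000, 0x80000, 1 },
};

/* Linear scan of the chunk array.  This is cheap because the number of
 * chunks is small and fixed, unlike looping over chunks per page on the
 * alloc/free fast path. */
static int page_to_node(unsigned long pfn)
{
    for (size_t i = 0; i < sizeof(chunks) / sizeof(chunks[0]); i++)
        if (pfn >= chunks[i].start_pfn && pfn < chunks[i].end_pfn)
            return chunks[i].node;
    return -1; /* pfn not covered by any chunk */
}
```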

Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
