xen-devel

Re: [Xen-devel] Memory allocation in NUMA system

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Memory allocation in NUMA system
From: "Yang, Xiaowei" <xiaowei.yang@xxxxxxxxx>
Date: Fri, 25 Jul 2008 15:22:12 +0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C4AF36FD.1B8CF%keir.fraser@xxxxxxxxxxxxx>
References: <C4AF36FD.1B8CF%keir.fraser@xxxxxxxxxxxxx>
Keir Fraser wrote:
On 25/7/08 04:34, "Yang, Xiaowei" <xiaowei.yang@xxxxxxxxx> wrote:

 > Let's say we have a 2-node system, with node0's and node1's memory
 > ranges being 0-0xc0000000 (<4G) and 0x100000000-0x1c0000000 (>4G)
 > respectively. In that case, node1's memory is always preferred for
 > domain memory allocation, no matter which node the created domain is
 > pinned to. This results in a performance penalty.
 >
 > One possible fix is to specify the full address range for the domain
 > memory allocation, which means local memory is preferred. This change
 > could be restricted to domains pinned to a single node, to limit its
 > impact.
 >
 > One side effect is that less DMA-capable memory may be left, which
 > makes the device domain unhappy. This can be addressed by reserving
 > node0 to be used last.

Doesn't your solution amount to what we already do, for the 2-node example?
i.e., node0 would not be chosen until node1 is exhausted?

Oh, what I mean is:
With the above fix, a domain's memory is allocated from the node it is pinned to. As node0's memory is precious for DMA, it is suggested to pin VMs to the other nodes first.

And for non-pinned VMs, we can stick to the original method.
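
To make the proposed node ordering concrete, here is a minimal sketch in C. It is illustrative only: domain_node_order(), DMA_NODE, and the array-based interface are assumptions made for this example, not Xen's actual heap allocator API.

#include <stddef.h>

#define DMA_NODE 0  /* node holding the low (<4G) DMA-capable memory */

/*
 * Hypothetical sketch: fill order[] with the nodes to try, most
 * preferred first, and return the number of entries written.
 * pinned_node is the node the domain is pinned to, or -1 if the
 * domain is not pinned.
 */
static size_t domain_node_order(int pinned_node, int nr_nodes, int order[])
{
    size_t n = 0;
    int node;

    /* Local memory first for a pinned domain. */
    if (pinned_node >= 0)
        order[n++] = pinned_node;

    /* Then the remaining nodes, leaving node0 aside for now. */
    for (node = 0; node < nr_nodes; node++)
        if (node != DMA_NODE && node != pinned_node)
            order[n++] = node;

    /*
     * node0 last, so its DMA-capable memory is touched only once
     * every other node is exhausted.
     */
    if (pinned_node != DMA_NODE)
        order[n++] = DMA_NODE;

    return n;
}

For a non-pinned VM (pinned_node == -1) this degenerates to the current behaviour of leaving node0 until every other node is exhausted.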

Thanks,
Xiaowei

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel