Re: [Xen-devel] Xen 3.4.1 NUMA support

To: George Dunlap <george.dunlap@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Xen 3.4.1 NUMA support
From: Andre Przywara <andre.przywara@xxxxxxx>
Date: Fri, 13 Nov 2009 15:14:49 +0100
Cc: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <Keir.Fraser@xxxxxxxxxxxxx>, Papagiannis Anastasios <apapag@xxxxxxxxxxxx>
In-reply-to: <4AF82FD8.6020409@xxxxxxxxxxxxx>
References: <bd4f4a54-5269-42d8-b16d-cbdfaeeba361@default> <4AF82F12.6040400@xxxxxxx> <4AF82FD8.6020409@xxxxxxxxxxxxx>
George Dunlap wrote:
> Andre Przywara wrote:
>> BTW: Shouldn't we finally set numa=on as the default value?
> Is there any data to support the idea that this helps significantly on common systems?
I did some tests on an 8-node machine. I will retry this later on 4-node and 2-node systems, but I expect similar numbers. I ran multiple guests in parallel, each running bw_mem from lmbench, which is admittedly quite NUMA-sensitive. I cannot publish real numbers (yet?), but the results were dramatic: with numa=on every guest achieved the same result as the native run, as long as the number of guests was less than or equal to the number of nodes (since each guest got its own memory controller). If I disabled NUMA-aware placement, either by explicitly specifying cpus="0-31" in the config file or by booting with numa=off, the values dropped by a factor of 3-5 (!) even with only a few guests, with some variance due to the random core-to-memory mapping.

Overcommitting the nodes (letting multiple guests share each node) lowered the values to about 80% with two guests and 60% with three guests per node, but it never got anywhere close to the numa=off values.
So these results encourage me again to opt for numa=on as the default value.
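For reference, the two setups compared above looked roughly like this (a minimal sketch of an xm domain config, which is Python syntax; the kernel path, memory size and domain name are just placeholders):

# Xen 3.4 xm domain config (Python syntax); values are placeholders.
kernel = "/boot/vmlinuz-2.6.18.8-xen"   # placeholder guest kernel
memory = 2048                           # MB, placeholder
vcpus  = 4
name   = "bench-guest1"

# NUMA-aware case: leave 'cpus' unset and boot the hypervisor with
# numa=on on the Xen command line, so the guest gets confined to a
# single node and uses its local memory controller.
#
# Placement-disabled case: either boot with numa=off, or pin the
# guest explicitly across all 32 cores of the 8-node box:
#cpus = "0-31"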
Keir, I will check if dropping the node containment in the CPU overcommitment case is an option, but what would be the right strategy in that case?
Warn the user?
Don't contain at all?
Contain to more than one node? (A rough sketch of that last option is below.)
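To make the last option a bit more concrete: one conceivable fallback would be to contain the guest to the smallest set of nodes that still covers its memory and VCPUs, roughly along these lines (a hypothetical Python sketch only, not existing xend code; the Node descriptor and its fields are made up for illustration):

from collections import namedtuple

# Made-up node descriptor; the real data would come from physinfo.
Node = namedtuple("Node", ["id", "free_memory_mb", "cpus"])

def pick_nodes(guest_mem_mb, guest_vcpus, nodes):
    """Return a greedy, smallest-first set of nodes whose free memory
    and CPUs cover the guest, or None if even all nodes are not enough."""
    chosen, mem, cpus = [], 0, 0
    # Take the emptiest nodes first so parallel guests spread out.
    for node in sorted(nodes, key=lambda n: n.free_memory_mb, reverse=True):
        chosen.append(node)
        mem  += node.free_memory_mb
        cpus += len(node.cpus)
        if mem >= guest_mem_mb and cpus >= guest_vcpus:
            return chosen          # contain the guest to these nodes
    return None                    # cannot contain: warn the user instead

Whether containing to two or three nodes still beats no containment at all would of course need the same kind of measurement as above.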

Regards,
Andre.

--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 448 3567 12
----to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Karl-Hammerschmidt-Str. 34, 85609 Dornach b. Muenchen
Managing directors (Geschaeftsfuehrer): Andrew Bowd; Thomas M. McCoy; Giuliano Meroni
Registered office (Sitz): Dornach, municipality of Aschheim, district of Munich
Commercial register: Munich, HRB No. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel