[Xen-devel] RE: Host NUMA information in dom0

To: "Kamble, Nitin A" <nitin.a.kamble@xxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] RE: Host NUMA information in dom0
From: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>
Date: Fri, 5 Feb 2010 17:39:09 +0000
Accept-language: en-US
Cc: Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>
Delivery-date: Fri, 05 Feb 2010 09:39:58 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <8EA2C2C4116BF44AB370468FBF85A7770123904A29@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <8EA2C2C4116BF44AB370468FBF85A7770123904A29@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcqhN4DFeKntxDxZTHGQrdKilppA4QAi60tw
Thread-topic: Host NUMA information in dom0
>    Attached is the patch which exposes the host NUMA information to dom0.
> With the patch, the "xm info" command now also gives the CPU topology and
> host NUMA information. This will later be used to build guest NUMA support.
> 
> The patch basically changes the physinfo sysctl, adds the topology_info and
> numa_info sysctls, and changes the python & libxc code accordingly.


It would be good to have a discussion about how we should expose NUMA 
information to guests. 

I believe we can control both the desired allocation of memory from nodes and 
the creation of guest NUMA tables using VCPU affinity masks, combined with a 
new boolean option that enables exposure of NUMA information to the guest.
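To illustrate, a guest config fragment might look like the following (xm 
configs are Python syntax). The 'numa' knob is the new boolean proposed above 
and does not exist today, and the per-VCPU list form of 'cpus' is assumed 
here purely for illustration:

vcpus = 4
cpus  = ["0-3", "0-3", "4-7", "4-7"]  # per-VCPU affinity masks
numa  = 1                             # hypothetical: expose NUMA tables to guest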

For each guest VCPU, we should inspect its affinity mask to see which nodes the 
VCPU is able to run on, thus building a set of 'allowed node' masks. We should 
then compare all the 'allowed node' masks to see how many unique node masks 
there are -- this corresponds to the number of NUMA nodes that we wish to 
expose to the guest if this guest has NUMA enabled. We would apportion the 
guest's pseudo-physical memory equally between these virtual NUMA nodes.

If guest NUMA is disabled, we just use a single node mask which is the union of 
the per-VCPU node masks.
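To make the two paragraphs above concrete, here is a minimal Python sketch of 
the mask logic. The cpu_to_node mapping, the per-VCPU affinity sets and the 
numa_enabled flag are illustrative inputs, not existing libxc/xend interfaces:

def plan_guest_numa(vcpu_affinity, cpu_to_node, memory_mb, numa_enabled):
    # vcpu_affinity: one set of physical CPU numbers per guest VCPU.
    # cpu_to_node:   maps a physical CPU number to its NUMA node.
    # Build an 'allowed node' mask for each VCPU from its affinity mask.
    allowed = [frozenset(cpu_to_node[c] for c in mask) for mask in vcpu_affinity]

    if not numa_enabled:
        # Guest NUMA disabled: one node mask, the union of the per-VCPU masks.
        return [frozenset().union(*allowed)], memory_mb

    # The number of unique 'allowed node' masks is the number of virtual
    # NUMA nodes to expose to the guest.
    vnodes = sorted(set(allowed), key=sorted)
    # Apportion the guest's pseudo-physical memory equally between them.
    return vnodes, memory_mb // len(vnodes)

So a 4-VCPU guest with VCPUs 0-1 pinned to CPUs on node 0 and VCPUs 2-3 
pinned to CPUs on node 1 yields two unique masks, hence two virtual nodes 
with half the memory each.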

Where allowed node masks span more than one physical node, we should allocate 
memory to the guest's virtual node by pseudo-randomly striping memory 
allocations (in 2MB chunks) across the specified physical nodes. [pseudo-random 
is probably better than round-robin]

Make sense? I can provide some worked examples.

As regards the socket vs node terminology, I agree the variables are probably 
badly named and would perhaps best be called 'node' and 'supernode'. The key 
thing is that the toolstack should allow hierarchy to be expressed when 
specifying CPUs (using a dotted notation) rather than having to specify the 
enumerated CPU number.
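As an illustration of the dotted notation, a hypothetical parser that flattens 
a 'supernode.node.cpu' spec to an enumerated CPU number; the fixed fan-out per 
level is an assumed layout, not how Xen actually enumerates CPUs:

def parse_dotted_cpu(spec, fanout):
    # spec:   dotted string such as '1.0.3' (supernode.node.cpu).
    # fanout: children per level, e.g. (nodes_per_supernode, cpus_per_node).
    parts = [int(p) for p in spec.split('.')]
    assert len(parts) == len(fanout) + 1
    cpu = parts[0]
    for part, width in zip(parts[1:], fanout):
        cpu = cpu * width + part   # Horner-style flattening
    return cpu

# e.g. parse_dotted_cpu('1.0.3', (2, 4)) == 11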


Best,
Ian


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
