Re: [Xen-devel][PATCH]pcpu tuples [was Re: [Xen-devel] Xen 3.4.1 NUMA s
Dulloor wrote:
Attached is a patch to construct pcpu tuples of the form
(node.socket.core.thread), which are (currently) used by the xm vcpu-pin utility.
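For illustration, a minimal sketch of how such labels might be built from a
per-cpu topology table (the table layout here is hypothetical, not Xen's
actual interface):

    # Hypothetical sketch: derive "node.socket.core.thread" labels per pcpu.
    # The cpuid -> (node, socket, core, thread) table is an assumed input.
    def pcpu_tuples(topology):
        """topology: dict mapping cpuid -> (node, socket, core, thread)."""
        return {cpu: "%d.%d.%d.%d" % t for cpu, t in topology.items()}

    # Example: 2 nodes, 1 socket per node, 2 cores each, no hyperthreading
    topo = {0: (0, 0, 0, 0), 1: (0, 0, 1, 0),
            2: (1, 1, 0, 0), 3: (1, 1, 1, 0)}
    for cpu, label in sorted(pcpu_tuples(topo).items()):
        print("cpu%d -> %s" % (cpu, label))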
Without having looked further at the patch: there will be problems with
that notation. The assumption that one node consists of at least one
socket is no longer true with AMD's upcoming Magny-Cours processors,
which feature _two_ nodes in one socket.
The socket information is of interest for licensing purposes and for
the voltage domains. I suppose that power-aware scheduling is out of
scope for the current scheduler, so we could ignore the socket
information here altogether.
Shared cache would be interesting information to consider for
scheduling purposes, but here again the socket information is
misleading: each node of the Magny-Cours processor has its own L3
cache, so there is no cache shared across the two nodes in one package.
Xen already detects the NUMA topology on the new system correctly:
nr_cpus : 48
nr_nodes : 8
cores_per_socket : 12
threads_per_core : 1
(48 CPUs at 12 cores per socket means 4 sockets, yet 8 nodes: two nodes per socket.)
I don't know details about the usual IA64 topology, though.
Currently I see these possible topologies for x86-64 systems:
Core2-based: 1 (fake) node, n sockets
AMD64/Nehalem: n nodes, 1 socket/node
AMD G34: n nodes, 2 or 1 nodes/socket(!)
It looks like it will not be easy to combine all of those. One
possibility would be to join nodes and sockets into one entity (use
sockets on older systems (L2-cache domains) and nodes on AMD/newer Intel
systems (memory-controller domains)). But I don't have a handy name for
that beast (let alone "nockets" ;-)
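As a sketch of that join, assuming the choice can simply be made per
platform (the function and its logic are purely illustrative, not an
existing Xen interface):

    # Sketch of the "join nodes and sockets" idea: use memory-controller
    # domains (nodes) where real NUMA exists, L2-cache domains (sockets)
    # otherwise.
    def grouping_entity(nr_nodes):
        return "node" if nr_nodes > 1 else "socket"

    print(grouping_entity(8))  # "node"   (e.g. Magny-Cours: 8 nodes)
    print(grouping_entity(1))  # "socket" (e.g. Core2: 1 fake node)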
Although it can be quite useful to have such a notation, I am not sure
whether it will really help. Eventually you want to move away from manual
assignment (even at a domain's runtime via "xm vcpu-pin").
Looking forward to any comments.
Regards,
Andre.
-dulloor
On Fri, Nov 13, 2009 at 11:02 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
On 13/11/2009 15:40, "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxxx> wrote:
Even better would be to have pCPUs addressable and listable explicitly as
dotted tuples. That can be implemented entirely within the toolstack, and
could even allow wildcarding of tuple components to efficiently express
cpumasks.
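A minimal sketch of such wildcard expansion, assuming a hypothetical
cpu -> (node, socket, core, thread) table (not what any current toolstack
implements):

    # Expand a dotted tuple with wildcards (e.g. "1.*.*.*") into the set
    # of matching pcpus. Missing trailing components act as wildcards.
    def tuple_to_cpumask(spec, topology):
        parts = spec.split(".")
        return {cpu for cpu, coords in topology.items()
                if all(p == "*" or int(p) == c
                       for p, c in zip(parts, coords))}

    topo = {0: (0, 0, 0, 0), 1: (0, 0, 1, 0),
            2: (1, 1, 0, 0), 3: (1, 1, 1, 0)}
    print(sorted(tuple_to_cpumask("1.*.*.*", topo)))  # [2, 3]: node 1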
Yes, I'd certainly like to see the toolstack support dotted tuple notation.
However, I just don't trust the toolstack to get this right unless xen has
already set it up nicely for it with a sensible enumeration and defined
sockets-per-node, cores-per-socket and threads-per-core parameters. Xen should
provide a clean interface to the toolstack in this respect.
Xen provides a topology-interrogation hypercall which should suffice for
tools to build up a {node,socket,core,thread}<->cpuid mapping table.
-- Keir
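For illustration, a sketch of the tools side of building that mapping
table, assuming the hypercall yields per-cpu node/socket/core arrays (the
array layout and the thread-id derivation here are assumptions):

    # Build cpuid -> (node, socket, core, thread) from per-cpu arrays.
    # Thread ids are assigned by order of appearance on each core.
    def build_mapping(cpu_to_node, cpu_to_socket, cpu_to_core):
        table, seen = {}, {}
        for cpu, key in enumerate(zip(cpu_to_node, cpu_to_socket,
                                      cpu_to_core)):
            t = seen.get(key, 0)      # next free thread slot on this core
            seen[key] = t + 1
            table[cpu] = key + (t,)
        return table

    # 4 cpus: 2 nodes, 1 socket per node, 2 cores, no hyperthreading
    print(build_mapping([0, 0, 1, 1], [0, 0, 1, 1], [0, 1, 0, 1]))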
--
Andre Przywara
AMD-OSRC (Dresden)
Tel: x29712
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel