> Besides that, I have to oppose the introduction of sockets_per_node again.
> Future AMD processors will feature _two_ nodes on _one_ socket, so this
> variable would have to hold 1/2, which will be rounded to zero. I think this
> information is pretty useless anyway, as the number of sockets is mostly
> interesting for licensing purposes, where a single number is sufficient.
I sent a similar patch (I was using it to list pcpu tuples and in
vcpu-pin/unpin) and didn't pursue it because of this same argument.
When we talk of CPU topology, that's how it is currently ordered:
nodes-socket-cpu-core. Don't sockets also figure in the cache and
interconnect hierarchy?
What would the hierarchy be on those future AMD processors? Even Keir
and Ian Pratt initially wanted the pcpu tuples to be listed that way,
so it would be helpful to make a call and move ahead.
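
For illustration, this is roughly the kind of per-cpu listing I had in
mind; the arrays below are made-up sample data for a part with two
nodes per socket, not the actual physinfo interface:

/* Illustrative only: list per-CPU (node, socket, core) tuples instead
 * of relying on a sockets_per_node ratio.  Sample data models sockets
 * that each span two nodes. */
#include <stdio.h>

#define NR_CPUS 8

static const int cpu_to_node[NR_CPUS]   = { 0, 0, 1, 1, 2, 2, 3, 3 };
static const int cpu_to_socket[NR_CPUS] = { 0, 0, 0, 0, 1, 1, 1, 1 };
static const int cpu_to_core[NR_CPUS]   = { 0, 1, 0, 1, 0, 1, 0, 1 };

int main(void)
{
    int cpu;

    /* Each CPU simply reports the node and socket it belongs to, so a
     * socket spanning two nodes needs no fractional bookkeeping. */
    for (cpu = 0; cpu < NR_CPUS; cpu++)
        printf("cpu%d: node %d, socket %d, core %d\n", cpu,
               cpu_to_node[cpu], cpu_to_socket[cpu], cpu_to_core[cpu]);

    return 0;
}
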
-dulloor
On Mon, Feb 1, 2010 at 5:23 AM, Andre Przywara <andre.przywara@xxxxxxx> wrote:
> Kamble, Nitin A wrote:
>>
>> Hi Keir,
>>
>> Attached is the patch which exposes the host numa information to dom0.
>> With the patch, the “xm info” command now also gives the CPU topology and
>> host NUMA information. This will later be used to build guest NUMA support.
>
> What information are you missing from the current physinfo? As far as I can
> see, only the total amount of memory per node is not provided. But one could
> get this info from parsing the SRAT table in Dom0, which is at least mapped
> into Dom0's memory.
> Or do you want to provide NUMA information to all PV guests (but then it
> cannot be a sysctl)? This would be helpful, as it would avoid having to
> enable ACPI parsing in PV Linux for NUMA guest support.
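
For reference, the SRAT walk you mention could look something like the
sketch below. It assumes the raw table bytes have already been read in
Dom0 (e.g. from /sys/firmware/acpi/tables/SRAT), the offsets follow the
ACPI Memory Affinity Structure (type 1), the host is little-endian, and
error handling is omitted:

#include <stdint.h>
#include <string.h>

#define SRAT_HDR_LEN     48   /* ACPI table header (36) + 12 reserved bytes */
#define SRAT_MEM_TYPE     1
#define SRAT_MEM_ENABLED (1u << 0)
#define MAX_NODES        64

static uint64_t node_mem[MAX_NODES];   /* bytes of RAM per proximity domain */

static void srat_node_memory(const uint8_t *srat, uint32_t table_len)
{
    uint32_t off = SRAT_HDR_LEN;

    while (off + 2 <= table_len) {
        uint8_t type = srat[off];
        uint8_t len  = srat[off + 1];

        if (len == 0 || off + len > table_len)
            break;

        if (type == SRAT_MEM_TYPE) {
            uint32_t domain, flags;
            uint64_t size;

            memcpy(&domain, srat + off + 2,  4);   /* proximity domain */
            memcpy(&size,   srat + off + 16, 8);   /* range length in bytes */
            memcpy(&flags,  srat + off + 28, 4);

            if ((flags & SRAT_MEM_ENABLED) && domain < MAX_NODES)
                node_mem[domain] += size;          /* accumulate per node */
        }
        off += len;
    }
}

The caller would then report node_mem[] as the total memory per node.
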
>
> Besides that, I have to oppose the introduction of sockets_per_node again.
> Future AMD processors will feature _two_ nodes on _one_ socket, so this
> variable would have to hold 1/2, which will be rounded to zero. I think this
> information is pretty useless anyway, as the number of sockets is mostly
> interesting for licensing purposes, where a single number is sufficient.
> For scheduling purposes, cache topology is more important.
>
> My NUMA guest patches (currently for HVM only) are doing fine; I will try to
> send out RFC patches this week. I think they don't interfere with this
> patch, but if you have other patches in development, we should sync on this.
> The scope of my patches is to let the user (or xend) describe a guest's
> topology (either by specifying only the number of guest nodes in the config
> file or by explicitly describing the whole NUMA topology). Some code will
> assign host nodes to the guest nodes (I am not sure yet whether this really
> belongs in xend, as it currently does, or is better done in libxc, where
> libxenlight would also benefit).
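
To make that step concrete, here is a minimal sketch of one possible
assignment policy (all names and numbers are made up; this is not the
actual xend/libxc code): give each guest node its own host node,
preferring the host nodes with the most free memory.

#include <stdint.h>
#include <stdio.h>

#define NR_HOST_NODES 4

int main(void)
{
    /* Sample free memory per host node, in MB. */
    uint64_t free_mb[NR_HOST_NODES] = { 2048, 512, 4096, 1024 };
    int used[NR_HOST_NODES] = { 0 };
    unsigned int guest_nodes = 2, per_node_mb = 1024;
    unsigned int g, h;

    for (g = 0; g < guest_nodes; g++) {
        int best = -1;

        /* Pick the unused host node with the most free memory. */
        for (h = 0; h < NR_HOST_NODES; h++)
            if (!used[h] && (best < 0 || free_mb[h] > free_mb[best]))
                best = h;

        if (best < 0 || free_mb[best] < per_node_mb) {
            fprintf(stderr, "cannot place guest node %u\n", g);
            return 1;
        }

        used[best] = 1;
        printf("guest node %u -> host node %d (%llu MB free)\n",
               g, best, (unsigned long long)free_mb[best]);
    }
    return 0;
}
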
> Then libxc's hvm_build_* will pass that info into the hvm_info_table, where
> code in hvmloader will generate an appropriate SRAT table.
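
Purely as an illustration of the kind of data that would have to flow
from the builder to hvmloader (the real hvm_info_table in
xen/include/public/hvm/hvm_info_table.h does not contain these fields;
all names here are hypothetical):

#include <stdint.h>

#define GUEST_MAX_NODES  8
#define GUEST_MAX_VCPUS  128

struct guest_numa_info {                      /* hypothetical */
    uint8_t  nr_nodes;                        /* number of guest nodes    */
    uint8_t  vcpu_to_node[GUEST_MAX_VCPUS];   /* guest vCPU -> guest node */
    uint64_t node_mem_mb[GUEST_MAX_NODES];    /* memory per guest node    */
};

hvmloader would then emit one SRAT memory affinity entry per guest node
and one processor affinity entry per vCPU from a structure like this.
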
> An extension of this would be to let Xen automatically decide whether a
> split of the resources is necessary (because there is no longer enough
> memory available on a single node).
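
That split decision could start out as simple as the sketch below
(illustrative numbers; it deliberately ignores that the remaining nodes
may have less free memory than the largest one):

#include <stdio.h>

int main(void)
{
    unsigned long guest_mb = 6144;          /* memory the guest wants       */
    unsigned long largest_free_mb = 4096;   /* biggest free chunk on a node */
    unsigned int nodes = 1;

    if (guest_mb > largest_free_mb)
        /* Round up: 6144 MB over 4096 MB nodes -> 2 guest nodes. */
        nodes = (guest_mb + largest_free_mb - 1) / largest_free_mb;

    printf("guest needs %u node(s)\n", nodes);
    return 0;
}
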
>
> Looking forward to comments...
>
> Regards,
> Andre.
>
> --
> Andre Przywara
> AMD-Operating System Research Center (OSRC), Dresden, Germany
> Tel: +49 351 448 3567 12
>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel