Re: [Xen-devel] Why is cpu-to-node mapping different between Xen 4.0.2-rc1-pre and Xen 4.1-unstable
>>> On 27.10.10 at 22:58, Dante Cinco <dantecinco@xxxxxxxxx> wrote:
> My system is a dual Xeon E5540 (Nehalem) HP ProLiant DL380 G6. When
> switching between Xen 4.0.2-rc1-pre and Xen 4.1-unstable, I noticed
> that the NUMA info shown by the Xen 'u' debug key is different.
> More specifically, the CPU-to-node mapping alternates in 4.0.2
> and is grouped sequentially in 4.1. This difference affects the
> allocation (w.r.t. node/socket) of pinned VCPUs to the guest domain.
> For example, if I allocate physical CPUs 0 - 3 to my guest domain, in
> 4.0.2 the 4 VCPUs are split between the 2 nodes, but in 4.1 all 4
> VCPUs end up on node 0.

This is apparently a result of the introduction of normalise_cpu_order().

Use of pinning to pre-determined, hard-coded numbers is quite
obviously dependent on hypervisor-internal behavior (i.e. it will
yield different results if the implementation changes).
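
As a rough illustration (hypothetical numbering schemes sketched for
this example, not the actual Xen code), the following shows why
hard-coding CPUs 0 - 3 spans both nodes under an interleaved numbering
but lands entirely on node 0 under a grouped one:

/* Hypothetical illustration, not Xen source: two ways a hypervisor
 * might number 8 CPUs on a 2-node machine, and which node each
 * hard-coded CPU lands on under each scheme. */
#include <stdio.h>

#define NR_CPUS  8   /* illustrative: 2 nodes x 4 CPUs */
#define NR_NODES 2

/* 4.0.2-style observation: CPU numbers alternate between nodes
 * (node pattern 0,1,0,1,...). */
static int node_interleaved(int cpu) { return cpu % NR_NODES; }

/* 4.1-style observation: CPU numbers are grouped per node
 * (node pattern 0,0,0,0,1,1,1,1). */
static int node_grouped(int cpu) { return cpu / (NR_CPUS / NR_NODES); }

int main(void)
{
    /* Pin a guest to hard-coded CPUs 0 - 3, as in the report. */
    for (int cpu = 0; cpu < 4; cpu++)
        printf("cpu %d: node %d (interleaved) vs node %d (grouped)\n",
               cpu, node_interleaved(cpu), node_grouped(cpu));
    return 0;
}

Running this, CPUs 1 and 3 map to node 1 under the interleaved scheme
but to node 0 under the grouped one, matching the observed behavior.
The robust approach is to query the actual topology at run time rather
than assume a fixed CPU numbering.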
Jan