This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Why is cpu-to-node mapping different between Xen 4.0.2-rc1-pre and Xen 4.1-unstable?

To: "Dante Cinco" <dantecinco@xxxxxxxxx>
Subject: Re: [Xen-devel] Why is cpu-to-node mapping different between Xen 4.0.2-rc1-pre and Xen 4.1-unstable?
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Thu, 28 Oct 2010 08:51:43 +0100
Cc: Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 28 Oct 2010 00:52:28 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AANLkTi=F54yLztBZvwvwQtym+jwc4Mo-Hs6HiSP24joF@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <AANLkTi=F54yLztBZvwvwQtym+jwc4Mo-Hs6HiSP24joF@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> On 27.10.10 at 22:58, Dante Cinco <dantecinco@xxxxxxxxx> wrote:

This is apparently a result of the introduction of normalise_cpu_order().

> My system is a dual Xeon E5540 (Nehalem) HP Proliant DL380G6. When
> switching between Xen 4.0.2-rc1-pre and Xen 4.1-unstable I noticed
> that the NUMA info as shown by the Xen 'u' debug-key is different.
> More specifically, the CPU to node mapping is alternating for 4.0.2
> and grouped sequentially for 4.1. This difference affects the
> allocation (wrt node/socket) of pinned VCPUs to the guest domain. For
> example, if I'm allocating physical CPUs 0 - 3 to my guest domain, in
> 4.0.2 the 4 VCPUs will be split between the 2 nodes but in 4.1 the 4
> VCPUs will all be in node 0.

Pinning to pre-determined, hard-coded CPU numbers quite obviously
depends on hypervisor-internal behavior (i.e. it will yield different
results if the implementation changes).
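[Editorial note: to illustrate the point, a minimal sketch of node-aware CPU selection. The two example mappings below mirror the behavior described in the report (4.0.2 alternating nodes, 4.1 grouping them sequentially); the `cpus_on_node` helper is hypothetical, and on a real system the cpu-to-node mapping would be read from the 'u' debug-key output or the toolstack's NUMA info rather than hard-coded.]

```python
# Select physical CPUs by NUMA node membership instead of by fixed CPU
# numbers, so the selection survives changes in Xen's CPU enumeration
# order (such as the normalise_cpu_order() change discussed above).

def cpus_on_node(cpu_to_node, node):
    """Return the sorted list of physical CPUs belonging to `node`."""
    return sorted(cpu for cpu, n in cpu_to_node.items() if n == node)

# Mapping as Xen 4.0.2-rc1-pre reported it (nodes alternating):
alternating = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1, 6: 0, 7: 1}
# The same box under Xen 4.1-unstable (nodes grouped sequentially):
grouped = {0: 0, 1: 0, 2: 0, 3: 0, 4: 1, 5: 1, 6: 1, 7: 1}

# Either way, selecting by node yields a set of same-node CPUs,
# whereas pinning to a fixed list like [0, 1, 2, 3] spans both
# nodes under one enumeration and one node under the other.
print(cpus_on_node(alternating, 0))  # [0, 2, 4, 6]
print(cpus_on_node(grouped, 0))      # [0, 1, 2, 3]
```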
