WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] Host Numa informtion in dom0

To: Andre Przywara <andre.przywara@xxxxxxx>
Subject: Re: [Xen-devel] Host Numa informtion in dom0
From: Dulloor <dulloor@xxxxxxxxx>
Date: Mon, 1 Feb 2010 12:53:37 -0500
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Mon, 01 Feb 2010 09:54:00 -0800
In-reply-to: <4B66AB88.6090208@xxxxxxx>
References: <8EA2C2C4116BF44AB370468FBF85A7770123904A29@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <4B66AB88.6090208@xxxxxxx>
> Besides that, I have to oppose the introduction of sockets_per_node again.
> Future AMD processors will feature _two_ nodes on _one_ socket, so this
> variable should hold 1/2, but this will be rounded to zero. I think this
> information is pretty useless anyway, as the number of sockets is mostly
> interesting for licensing purposes, where a single number is sufficient.

I sent a similar patch (I was using it to enlist pcpu tuples and in
vcpu-pin/unpin) and didn't pursue it because of this same argument.
When we talk of CPU topology, this is how it currently stands:
node-socket-cpu-core. Don't sockets also figure in the cache and
interconnect hierarchy?
What would the hierarchy be in those future AMD processors? Even Keir
and Ian Pratt initially wanted the pcpu tuples
to be listed that way. So it would be helpful to make a call and move ahead.
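The ordering in question could be sketched roughly as follows. The topology data here is invented for illustration; the real values would come from physinfo, and the exact tuple layout is exactly what is still to be decided:

```python
# Sketch: list physical CPUs as (node, socket, core, cpu) tuples,
# sorted in the node-socket-core-cpu hierarchy discussed above.
# The topology dict below is invented example data, not real physinfo output.

topology = {
    0: {"node": 0, "socket": 0, "core": 0},
    1: {"node": 0, "socket": 0, "core": 1},
    2: {"node": 1, "socket": 1, "core": 0},
    3: {"node": 1, "socket": 1, "core": 1},
}

def pcpu_tuples(topo):
    """Return (node, socket, core, cpu) tuples sorted hierarchically."""
    return sorted((v["node"], v["socket"], v["core"], cpu)
                  for cpu, v in topo.items())

for t in pcpu_tuples(topology):
    print(t)
```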

-dulloor


On Mon, Feb 1, 2010 at 5:23 AM, Andre Przywara <andre.przywara@xxxxxxx> wrote:
> Kamble, Nitin A wrote:
>>
>> Hi Keir,
>>
>>   Attached is the patch which exposes the host numa information to dom0.
>> With the patch “xm info” command now also gives the cpu topology & host numa
>> information. This will be later used to build guest numa support.
>
> What information are you missing from the current physinfo? As far as I can
> see, only the total amount of memory per node is not provided. But one could
> get this info from parsing the SRAT table in Dom0, which is at least mapped
> into Dom0's memory.
> Or do you want to provide NUMA information to all PV guests (but then it
> cannot be a sysctl)? This would be helpful, as it would avoid having to
> enable ACPI parsing in PV Linux for NUMA guest support.
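Summing per-node memory out of the SRAT, as suggested above, could be sketched like this. It assumes the layout from the ACPI specification (36-byte table header plus 12 reserved bytes, then typed entries; type 1 is a 40-byte Memory Affinity Structure); the sample blob at the end is hand-built for illustration, not a real table:

```python
import struct

def srat_memory_per_node(srat):
    """Sum memory-affinity ranges (entry type 1) per proximity domain.

    Assumes the ACPI SRAT layout: a 36-byte table header plus 12
    reserved bytes, then variable-length entries that each start
    with (type, length) bytes.
    """
    mem = {}
    off = 48  # first static resource allocation structure
    while off + 2 <= len(srat):
        etype, elen = srat[off], srat[off + 1]
        if elen == 0:
            break
        if etype == 1 and elen >= 40:  # Memory Affinity Structure
            domain, = struct.unpack_from("<I", srat, off + 2)
            base, length = struct.unpack_from("<QQ", srat, off + 8)
            flags, = struct.unpack_from("<I", srat, off + 28)
            if flags & 1:  # bit 0: entry enabled
                mem[domain] = mem.get(domain, 0) + length
        off += elen
    return mem

# Hand-built example: one enabled 1 GiB range in proximity domain 0.
header = b"SRAT" + bytes(44)
entry = struct.pack("<BBIHQQIIQ", 1, 40, 0, 0, 0, 1 << 30, 0, 1, 0)
print(srat_memory_per_node(header + entry))  # {0: 1073741824}
```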
>
> Besides that, I have to oppose the introduction of sockets_per_node again.
> Future AMD processors will feature _two_ nodes on _one_ socket, so this
> variable should hold 1/2, but this will be rounded to zero. I think this
> information is pretty useless anyway, as the number of sockets is mostly
> interesting for licensing purposes, where a single number is sufficient.
>  For scheduling purposes cache topology is more important.
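The rounding problem is easy to demonstrate: with integer fields, a hypothetical part with two nodes per socket cannot be represented at all (the numbers below are invented for illustration):

```python
# Demonstrating the objection above: an integer sockets_per_node
# cannot represent two nodes sharing one socket (the true ratio is 1/2).
nr_sockets = 4
nr_nodes = 8   # hypothetical future part: two nodes per socket

sockets_per_node = nr_sockets // nr_nodes
print(sockets_per_node)  # 0 -- the topology information is lost
```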
>
> My NUMA guest patches (currently for HVM only) are doing fine; I will try to
> send out an RFC patch series this week. I think they don't interfere with this
> patch, but if you have other patches in development, we should sync on this.
> The scope of my patches is to let the user (or xend) describe a guest's
> topology (either by specifying only the number of guest nodes in the config
> file or by explicitly describing the whole NUMA topology). Some code will
> assign host nodes to the guest nodes (I am not sure yet whether this really
> belongs in xend as it currently does, or is better done in libxc, where
> libxenlight would also benefit).
> Then libxc's hvm_build_* will pass that info into the hvm_info_table, where
> code in the hvmloader will generate an appropriate SRAT table.
> An extension of this would be to let Xen automatically decide whether a
> split of the resources is necessary (because there is not enough memory
> available on one node anymore).
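As a purely hypothetical illustration of the first variant (specifying only the number of guest nodes), a guest config might look like the fragment below. xm configs are Python syntax, but the `guestnodes` name here is invented; the actual option names would be defined by the RFC patches, which are not shown here:

```python
# Hypothetical xm-style guest config (xm configs are Python syntax).
# "guestnodes" is an invented option name for illustration only.
memory = 4096
vcpus = 4
guestnodes = 2   # only the number of guest NUMA nodes is given;
                 # host-node assignment is left to xend/libxc
```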
>
> Looking forward to comments...
>
> Regards,
> Andre.
>
> --
> Andre Przywara
> AMD-Operating System Research Center (OSRC), Dresden, Germany
> Tel: +49 351 448 3567 12
> ----to satisfy European Law for business letters:
> Advanced Micro Devices GmbH
> Karl-Hammerschmidt-Str. 34, 85609 Dornach b. Muenchen
> Geschaeftsfuehrer: Andrew Bowd; Thomas M. McCoy; Giuliano Meroni
> Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
> Registergericht Muenchen, HRB Nr. 43632
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
>

