[Xen-devel] [PATCH 0/4] hvm: NUMA guest support

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] [PATCH 0/4] hvm: NUMA guest support
From: Andre Przywara <andre.przywara@xxxxxxx>
Date: Fri, 4 Jul 2008 09:55:17 +0200
Hi all,

These patches introduce NUMA support for HVM guests. A new config option,
'guestnodes', specifies the number of NUMA nodes the guest should see.
Memory will be allocated from that many different host nodes, CPU affinity
will be set accordingly, and the guest will be informed about the topology
via an ACPI SRAT table.
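
A minimal sketch of an HVM guest config using the new option (the
'guestnodes' name comes from this series; the file name and all other
values are purely illustrative):

    # numaguest.cfg - illustrative HVM guest configuration
    kernel     = "/usr/lib/xen/boot/hvmloader"
    builder    = "hvm"
    name       = "numaguest"
    memory     = 8192        # in MB; 8 GB, split across the guest nodes
    vcpus      = 8
    guestnodes = 2           # expose two NUMA nodes to the guest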
This allows guests that are larger than a single host node, in terms of
either the number of VCPUs or the total amount of memory. On AMD Opteron
platforms such guests would otherwise suffer from non-optimal memory
accesses (to remote nodes); in practice this limits the number of VCPUs to
the number of cores in one socket (2 or 4). Another issue solved by this is
"fragmented" memory, where the total amount of free memory would be enough
for a guest, but cannot be allocated from a single node (for example, four
nodes with 2 GB free each can hold an 8 GB guest only by spanning nodes).
Overcommitting the number of nodes is currently not possible, so you need a
NUMA machine to use this.

I have seen performance penalties of 7-12% with kernbench on Opterons for
guests using remote memory (numa=off or explicitly wrongly pinned).
Explicitly pinning guests with cpus="x-y", omitting the guestnodes option,
or specifying guestnodes=0 turns off the new code and reverts to the
current behavior (automatic placement).
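
One way to verify the exposed topology from inside a NUMA-aware guest,
using standard Linux tools (not part of this patch series):

    # inside the guest: show the NUMA nodes the kernel sees
    numactl --hardware
    # or check whether the kernel parsed an SRAT at boot
    dmesg | grep -i srat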

It would be nice if this could still find its way into 3.3.
Please apply the following four patches in order; the tree should compile
and run after each patch. More details are in the respective mails.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>

Regards,
Andre.

--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 277-84917
----to satisfy European Law for business letters:
AMD Saxony Limited Liability Company & Co. KG,
Wilschdorfer Landstr. 101, 01109 Dresden, Germany
Register Court Dresden: HRA 4896, General Partner authorized
to represent: AMD Saxony LLC (Wilmington, Delaware, US)
General Manager of AMD Saxony LLC: Dr. Hans-R. Deppe, Thomas McCoy


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
