[Xen-devel] [PATCH 0/4] [HVM] NUMA support in HVM guests

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] [PATCH 0/4] [HVM] NUMA support in HVM guests
From: "Andre Przywara" <andre.przywara@xxxxxxx>
Date: Mon, 13 Aug 2007 12:01:04 +0200
Delivery-date: Mon, 13 Aug 2007 03:02:42 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.10 (X11/20070409)
Hi,

these four patches allow forwarding NUMA characteristics into HVM
guests. This works by allocating memory explicitly from different NUMA nodes and creating an appropriate ACPI SRAT table that describes the topology. A guest kernel that parses the SRAT table is needed to discover the NUMA topology. This removes the current de-facto limitation of guests to a single NUMA node: a guest can then use more memory and/or more VCPUs than are available on one node.
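
For reference, the two SRAT entry types a NUMA-aware guest kernel parses look
roughly like the C sketch below (following the ACPI specification's layout; the
field names and packed structs are illustrative, not necessarily what the
patches use):

    #include <stdint.h>

    /* Processor Local APIC/SAPIC Affinity Structure (type 0, 16 bytes):
     * maps one (virtual) CPU to a proximity domain, i.e. a NUMA node. */
    struct srat_processor_affinity {
        uint8_t  type;                    /* 0 */
        uint8_t  length;                  /* 16 */
        uint8_t  proximity_domain_lo;     /* NUMA node, bits 7:0 */
        uint8_t  apic_id;                 /* local APIC ID of the (V)CPU */
        uint32_t flags;                   /* bit 0: entry enabled */
        uint8_t  local_sapic_eid;
        uint8_t  proximity_domain_hi[3];  /* NUMA node, bits 31:8 */
        uint32_t reserved;
    } __attribute__((packed));

    /* Memory Affinity Structure (type 1, 40 bytes):
     * assigns one physical address range to a proximity domain. */
    struct srat_memory_affinity {
        uint8_t  type;                    /* 1 */
        uint8_t  length;                  /* 40 */
        uint32_t proximity_domain;        /* NUMA node of this range */
        uint16_t reserved1;
        uint32_t base_address_lo;         /* start of the range */
        uint32_t base_address_hi;
        uint32_t length_lo;               /* size of the range */
        uint32_t length_hi;
        uint32_t reserved2;
        uint32_t flags;                   /* bit 0: entry enabled */
        uint64_t reserved3;
    } __attribute__((packed));

A guest kernel typically needs one processor entry per VCPU and one memory
entry per guest node to reconstruct the topology.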

        Patch 1/4: introduce a numanodes=n config file option.
This specifies how many NUMA nodes the guest should see; the default is 0,
which turns off most of the new code.
        Patch 2/4: introduce CPU affinity for the allocate_physmap call.
Currently the NUMA node to take the memory from is chosen simply by using
the currently scheduled CPU; this patch allows a CPU to be specified
explicitly and provides XENMEM_DEFAULT_CPU for the old behavior.
        Patch 3/4: allocate memory with NUMA in mind.
Actually honour the numanodes=n option by splitting the memory request
into n parts and allocating each part from a different node (a rough
sketch of this splitting follows below). Also change the VCPU affinity
to match the nodes.
        Patch 4/4: inject the created SRAT table into the guest.
Create an SRAT table, fill it with the desired NUMA topology and
inject it into the guest.
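
To illustrate the splitting described for patch 3/4: with e.g. numanodes=2 in
the guest config file, memory and VCPUs are divided across the guest nodes
roughly as in this self-contained C sketch (the even split and the remainder
handling are assumptions for illustration, not necessarily the patch's exact
policy):

    #include <stdio.h>

    int main(void)
    {
        unsigned long tot_pages = 1UL << 20;  /* e.g. 4 GB of 4 KB pages */
        unsigned int  nr_vcpus  = 4;
        unsigned int  numanodes = 2;          /* the numanodes=n option */
        unsigned int  node;

        for (node = 0; node < numanodes; node++) {
            unsigned long pages = tot_pages / numanodes;
            unsigned int  vcpus = nr_vcpus  / numanodes;

            /* Assumption: any remainder goes to the last guest node. */
            if (node == numanodes - 1) {
                pages += tot_pages % numanodes;
                vcpus += nr_vcpus  % numanodes;
            }

            /* The real code would allocate 'pages' from a chosen host node
             * (via the CPU-affinity-aware allocation from patch 2/4) and
             * pin the corresponding VCPUs; here we only print the split. */
            printf("guest node %u: %lu pages, %u VCPUs\n", node, pages, vcpus);
        }
        return 0;
    }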

Applies against staging c/s #15719.

Signed-off-by: Andre Przywara <andre.przywara@xxxxxxx>

Regards,
Andre.

--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 277-84917
----to satisfy European Law for business letters:
AMD Saxony Limited Liability Company & Co. KG
Registered office (business address): Wilschdorfer Landstr. 101, 01109 Dresden, Germany
Court of registration: Dresden, HRA 4896
General partner authorised to represent: AMD Saxony LLC (registered office: Wilmington, Delaware, USA)
Managing directors of AMD Saxony LLC: Dr. Hans-R. Deppe, Thomas McCoy



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
