WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

[Xen-devel] [PATCH 0/5] [POST-4.0]: RFC: HVM NUMA guest support

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "Kamble, Nitin A" <nitin.a.kamble@xxxxxxxxx>
Subject: [Xen-devel] [PATCH 0/5] [POST-4.0]: RFC: HVM NUMA guest support
From: Andre Przywara <andre.przywara@xxxxxxx>
Date: Thu, 4 Feb 2010 22:50:30 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 04 Feb 2010 13:48:38 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 2.0.0.18 (X11/20081105)
Hi,
to avoid duplicated work in the community on this topic, to help us
sync on the subject, and because I am out of the office next week, I
would like to send the NUMA guest support patches I have so far.

These patches introduce NUMA support for guests. This can be handy if either the guest's resources (VCPUs and/or memory) exceed one node's capacity, or the host is already loaded so that the requirement cannot be satisfied from a single node. Some applications may also benefit from the aggregated bandwidth of multiple memory controllers. Even if the guest ends up with only a single node, this code replaces the current NUMA placement mechanism by moving it into libxc.

I have changed some things recently, so there are a few loose ends,
but it should suffice as a basis for discussion.

The patches are primarily for HVM guests; since I do not deal much with PV, I am not sure whether a port would be straightforward or more involved. One thing I was not sure about is how to communicate the NUMA topology to PV guests. Reusing the existing code base and injecting a generated ACPI table seems smart, but that would mean enabling ACPI parsing code in PV Linux, which currently seems to be disabled (?). If someone wants to step in and implement PV support, I will be glad to help.

I have reworked the (guest node to) host node assignment part; this is
currently unfinished. I decided to move the node-rating part from
XendDomainInfo.py:find_relaxed_node() into libxc (should this eventually go into libxenlight?) to avoid passing too much information between the layers and to include libxl support. This code snippet (patch 5/5) basically scans all VCPUs of all domains and generates an array holding the load metric for each node, for later sorting. The missing piece here is a static function in xc_hvm_build.c that picks the <n> best nodes and populates the numainfo->guest_to_host_node array with the result. I will do this when I am back.

For more details see the following email bodies.

Thanks and Regards,
Andre.

--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 488-3567-12
----to satisfy European Law for business letters:
Advanced Micro Devices GmbH
Karl-Hammerschmidt-Str. 34, 85609 Dornach b. Muenchen
Geschaeftsfuehrer: Andrew Bowd; Thomas M. McCoy; Giuliano Meroni
Sitz: Dornach, Gemeinde Aschheim, Landkreis Muenchen
Registergericht Muenchen, HRB Nr. 43632


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel