[Xen-ia64-devel] [PATCH] Add free memory size of every NUMA node in physical info

To: "xen-ia64-devel" <xen-ia64-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-ia64-devel] [PATCH] Add free memory size of every NUMA node in physical info
From: "Duan, Ronghui" <ronghui.duan@xxxxxxxxx>
Date: Tue, 26 Feb 2008 10:06:42 +0800
Delivery-date: Mon, 25 Feb 2008 18:08:10 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ia64-devel-request@lists.xensource.com?subject=help>
List-id: Discussion of the ia64 port of Xen <xen-ia64-devel.lists.xensource.com>
List-post: <mailto:xen-ia64-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ia64-devel>, <mailto:xen-ia64-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ia64-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Ach4HDtEX2I5G2khT8qFNzkaob1ixQ==
Thread-topic: [PATCH] Add free memory size of every NUMA node in physical info

This patch returns the free memory size of each NUMA node in "xm info". This information can help users who want to bind their guest domain to one node of a NUMA machine by setting CPU affinity. This is the IA64 part of the support; it depends on the x86 part patch, which I have sent to the xen-devel mailing list.
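
As an illustration of how the new per-node figures might be used, here is a minimal Python sketch: it reads "xm info", picks the node with the most free memory, and pins all VCPUs of a domain to that node's CPUs with "xm vcpu-pin". The "node_to_memory" field name, its line format, and the node-to-CPU mapping below are assumptions made for the example, not details taken from the patch.

#!/usr/bin/env python
# Hypothetical example (not part of the patch): pick the NUMA node with the
# most free memory reported by "xm info" and pin a guest's VCPUs there.
import subprocess

def node_free_memory():
    # Parse "xm info" into {node_id: free_memory}.  The assumed line format
    # is "node_to_memory        : node0:1024 node1:2048".
    out = subprocess.Popen(["xm", "info"],
                           stdout=subprocess.PIPE).communicate()[0]
    if not isinstance(out, str):
        out = out.decode()
    nodes = {}
    for line in out.splitlines():
        if line.startswith("node_to_memory"):
            for entry in line.split(":", 1)[1].split():
                node, mem = entry.split(":")
                nodes[int(node.replace("node", ""))] = int(mem)
    return nodes

def pin_domain_to_node(domain, node_id, node_cpus):
    # Pin every VCPU of <domain> to the CPU range of <node_id>,
    # e.g. node_cpus = {0: "0-3", 1: "4-7"} (assumed topology).
    subprocess.call(["xm", "vcpu-pin", domain, "all", node_cpus[node_id]])

if __name__ == "__main__":
    free = node_free_memory()
    best = max(free, key=free.get)      # node with the most free memory
    pin_domain_to_node("guest1", best, {0: "0-3", 1: "4-7"})

In practice the per-node CPU ranges should be taken from the machine's real topology (for example a node-to-CPU line of "xm info", if present) rather than the hard-coded map used above.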

 

Attachment: get_node_memory_ia64_dom0.patch
Description: get_node_memory_ia64_dom0.patch

Attachment: get_node_memory_ia64.patch
Description: get_node_memory_ia64.patch

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel