xen-devel

RE: [Xen-devel] [RFC] Xen NUMA strategy

To: "Xu, Anthony" <anthony.xu@xxxxxxxxx>, "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxx>, "Akio Takebe" <takebe_akio@xxxxxxxxxxxxxx>, "Andre Przywara" <andre.przywara@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [RFC] Xen NUMA strategy
From: "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxx>
Date: Thu, 20 Sep 2007 10:56:52 +0100
Delivery-date: Thu, 20 Sep 2007 02:58:44 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <46EA7906.2010504@xxxxxxx><54C7F9BA4B1341takebe_akio@xxxxxxxxxxxxxx> <51CFAB8CB6883745AE7B93B3E084EBE2011113AE@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <8A87A9A84C201449A0C56B728ACF491E260723@xxxxxxxxxxxxxxxxxxxxxxxxxxx> <51CFAB8CB6883745AE7B93B3E084EBE20116EEDD@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acf5umQg/TXGkEp9Rja0eZWxrcXKfgAAfFFgAASYjkAAVXpjMAARxh+Q
Thread-topic: [Xen-devel] [RFC] Xen NUMA strategy
> >There may be some usage scenarios where having a large SMP guest that
> >spans multiple nodes would be desirable. However, there's a bunch of
> >scalability work that's required in Xen before this will really make
> >sense, and all of this is much higher priority (and more generally
> >useful) than figuring out how to expose NUMA topology to guests. I'd
> >definitely encourage looking at the guest scalability issues first.
>  
>       What you have said may be true: many guests have small numbers
> of vCPUs. In that situation we need to pin the guest to a node for good
> performance.
> Pinning guests to nodes may lead to imbalance after some guests have
> been created and destroyed, so we also need to handle that imbalance.
> Better host NUMA support is needed.

Localhost relocate is a crude way of doing this rebalancing today. Sure,
we can do better, but it's a solution.
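
To be concrete, the sort of pinning being talked about can be expressed in
the domain config today (this fragment assumes node 0 owns physical CPUs
0-3; the numbers are illustrative, not from any particular box):

    # Guest config fragment: keep this guest on node 0's physical CPUs.
    # Assumes node 0 owns pCPUs 0-3 -- adjust to the real topology.
    vcpus  = 2
    cpus   = "0-3"     # vCPU affinity, so the scheduler stays on node 0
    memory = 2048

and the crude rebalance is then roughly "xm migrate --live <domain>
localhost" (with relocation enabled in xend-config.sxp), which frees and
re-allocates the guest's memory so it can be re-pinned on a less loaded
node.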

>       Even if we don't have big guests, we may still need to let a guest
> span NUMA nodes. For example, when we create a guest with a large amount
> of memory, no single NUMA node can satisfy the memory request, so the
> guest has to span NUMA nodes. We need to provide the guest with NUMA
> information.

In that far-from-optimal situation you'll likely want to try and
rebalance things at some point later. Since no guest OS I'm aware of
understands dynamic NUMA information, I seriously doubt any good can
come from telling it about the temporary situation.
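
To illustrate the kind of decision the tools end up making (a sketch only,
not existing xend code; the per-node free memory numbers are made up):

    # Sketch: decide whether a guest fits on one node, and if not, which
    # nodes it would span (i.e. what vnode -> pnode mapping we would have
    # to describe to the guest if we ever did expose topology).
    node_free_mem = {0: 3072, 1: 2048}   # MB free per node (made-up)

    def place(mem_mb):
        # Prefer a single node that can hold the whole guest.
        for node, free in sorted(node_free_mem.items(), key=lambda x: -x[1]):
            if free >= mem_mb:
                return [node]
        # Otherwise the guest has to span nodes, most-free first.
        nodes, remaining = [], mem_mb
        for node, free in sorted(node_free_mem.items(), key=lambda x: -x[1]):
            nodes.append(node)
            remaining -= free
            if remaining <= 0:
                return nodes
        raise MemoryError("host cannot satisfy the request")

    print(place(2048))   # [0]    - fits on one node, no guest NUMA info needed
    print(place(4096))   # [0, 1] - spans nodes; only here would topology matter

The second case is the one Anthony describes; the sketch just makes
explicit that it only arises when no single node can hold the guest.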

>       There are also very small NUMA nodes, maybe one CPU per node. If
> a guest has two vCPUs, we need to provide the guest with NUMA
> information; otherwise performance will suffer badly.

Ian



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel