This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] [RFC] Xen NUMA strategy

To: "Akio Takebe" <takebe_akio@xxxxxxxxxxxxxx>, "Andre Przywara" <andre.przywara@xxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] [RFC] Xen NUMA strategy
From: "Xu, Anthony" <anthony.xu@xxxxxxxxx>
Date: Tue, 18 Sep 2007 14:33:23 +0800
Delivery-date: Mon, 17 Sep 2007 23:34:58 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <54C7F9BA4B1341takebe_akio@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <46EA7906.2010504@xxxxxxx> <54C7F9BA4B1341takebe_akio@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acf5umQg/TXGkEp9Rja0eZWxrcXKfgAAfFFg
Thread-topic: [Xen-devel] [RFC] Xen NUMA strategy
>We may need to write something about guest NUMA in the guest
>configuration file. For example:
>vnode = <number of guest nodes>
>vcpu = [<vcpus pinned to the node>:<machine node#>, ...]
>memory = [<amount of memory per node>:<machine node#>, ...]
>vnode = 2
>vcpu = [0-1:0, 2-3:1]
>memory = [128:0, 128:1]
>If we set vnode=1, old OSes should work fine.

This is something we need to do.
But if the user forgets to configure guest NUMA in the guest
configuration file, Xen needs to provide optimized guest NUMA
information based on the current workload on the physical machine.
We need to provide both; the user's configuration can override the default.
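The kind of default policy meant here could look like the following minimal sketch, assuming Xen (or a dom0 tool) can query per-node free memory; the greedy placement and the data structures are illustrative assumptions, not Xen code:

```python
def default_vnode_placement(guest_mem_mb, guest_vcpus, machine_nodes):
    """Pick a default guest NUMA layout when the user configured none.

    machine_nodes: {node_id: free_mem_mb}. Prefer a single vnode on the
    least-loaded node; fall back to two vnodes if no node has room.
    """
    best = max(machine_nodes, key=machine_nodes.get)
    if machine_nodes[best] >= guest_mem_mb:
        # One vnode: old, non-NUMA-aware guests keep working unchanged.
        return {best: {"vcpus": list(range(guest_vcpus)),
                       "mem": guest_mem_mb}}
    # Otherwise split vcpus and memory evenly across the two freest nodes.
    a, b = sorted(machine_nodes, key=machine_nodes.get, reverse=True)[:2]
    half = guest_vcpus // 2
    return {
        a: {"vcpus": list(range(half)), "mem": guest_mem_mb // 2},
        b: {"vcpus": list(range(half, guest_vcpus)),
            "mem": guest_mem_mb - guest_mem_mb // 2},
    }

plan = default_vnode_placement(256, 4, {0: 512, 1: 300})
```

Here a 256 MB, 4-vcpu guest fits entirely on node 0, so the default is a single vnode, which matches the "vnode=1 works for old OSes" point above.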

>And most OSes read the NUMA configuration only at boot and on CPU/memory
>hotplug. So if Xen migrates a vcpu, Xen has to trigger a hotplug event.
The guest should not be aware of vcpu migration, so Xen doesn't trigger a
hotplug event to the guest.

Maybe we should not call it vcpu migration; we can call it vnode
migration. Xen (maybe a dom0 application) needs to migrate a vnode
(including its vcpus and memory) from one physical node to another. The
guest NUMA topology is not changed, so Xen doesn't need to inform the
guest of the vnode migration.
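The idea can be sketched as follows: only the vnode-to-machine-node mapping changes, while everything guest-visible stays fixed. The mapping structure and function are assumptions for illustration, not hypervisor interfaces:

```python
def migrate_vnode(v2m, vnode_id, dst_mnode):
    """Retarget one guest vnode to a different machine node.

    v2m: {guest_vnode: machine_node}. The guest's own NUMA table
    (vnode ids, vcpu grouping, memory sizes) is untouched, so no
    hotplug event needs to be delivered to the guest.
    """
    old = v2m[vnode_id]
    new_map = dict(v2m)
    new_map[vnode_id] = dst_mnode
    # In a real implementation, Xen/dom0 would now re-pin the vnode's
    # vcpus and copy its memory pages from `old` to `dst_mnode`; that
    # copy is the only point where performance is affected.
    return new_map

mapping = migrate_vnode({0: 0, 1: 1}, 1, 2)
```

After the call, guest vnode 1 is backed by machine node 2, but the guest still sees the same two-vnode topology it booted with.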

>It's costly. So pinning vcpus to nodes may be good.

>I think basically pinning a guest to a node is good.
>If the system becomes imbalanced, and we absolutely want
>to migrate a guest, then Xen temporarily migrates only vcpus,
>and we give up some performance at that time.
As I mentioned above, it is not a temporary migration. And it will not
impact performance (it may impact performance only during the vnode
migration itself).

And I think imbalance is rare in a VMM if the user doesn't create and
destroy domains frequently. And there are far fewer VMs on a VMM than
applications on an OS.

- Anthony

Xen-devel mailing list