RE: [Xen-devel] [RFC] Xen NUMA strategy
> >We may need to write something about guest NUMA in the guest
> >configuration file.
> >For example, in the guest configuration file:
> >vnode = <number of guest nodes>
> >vcpu = [<vcpus# pinned to the node: machine node#>, ...]
> >memory = [<amount of memory per node: machine node#>, ...]
> >
> >e.g.
> >vnode = 2
> >vcpu = [0-1:0, 2-3:1]
> >memory = [128:0, 128:1]
> >
> >If we set vnode=1, old OSes should work fine.
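For concreteness, the proposal might end up looking something like the
sketch below in a full guest config file. Only the vnode/vcpu/memory
keys come from the proposal above; the remaining options are ordinary
xm settings added purely for context, and writing the values as quoted
strings is an assumption (xm config files are parsed as Python, so the
bare 0-1:0 form would need quoting or some equivalent encoding).

name   = "numa-guest"
kernel = "/boot/vmlinuz-2.6-xen"
disk   = ["phy:/dev/vg0/numa-guest,xvda,w"]
vcpus  = 4
# Proposed (not yet implemented) NUMA keys:
vnode  = 2                     # two guest (virtual) NUMA nodes
vcpu   = ["0-1:0", "2-3:1"]    # vcpus 0-1 backed by machine node 0, 2-3 by node 1
memory = ["128:0", "128:1"]    # 128MB from node 0, 128MB from node 1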
We need to think carefully about NUMA use cases before implementing a
bunch of mechanism.
The way I see it, in most situations it will not make sense for guests
to span NUMA nodes: you'll have a number of guests with relatively small
numbers of vCPUs, and it probably makes sense to allow the guests to be
pinned to nodes. What we have in Xen today works pretty well for this
case, but we could make configuration easier by looking at more
sophisticated mechanisms for specifying CPU groups rather than just
pinning. Migration between nodes could be handled with a localhost
migrate, but we could obviously come up with something more time/space
efficient (particularly for HVM guests) if required.
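As a rough illustration (not a recommendation) of what today's
pinning-based approach looks like, assuming for the sake of the example
that node 1 covers physical CPUs 4-7 on this box, the guest config
would carry something like:

vcpus = 4
cpus  = "4-7"   # confine all of this guest's vcpus to node 1's physical CPUs

The same restriction can also be applied to a running guest with
xm vcpu-pin.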
There may be some usage scenarios where having a large SMP guest that
spans multiple nodes would be desirable. However, there's a bunch of
scalability work that's required in Xen before this will really make
sense, and all of this is much higher priority (and more generally
useful) than figuring out how to expose NUMA topology to guests. I'd
definitely encourage looking at the guest scalability issues first.
Thanks,
Ian
> This is something we need to do.
> But if the user forgets to configure guest NUMA in the guest
> configuration file, Xen needs to provide optimized guest NUMA
> information based on the current workload on the physical machine.
> We need to provide both; user configuration can override the default
> configuration.
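A minimal sketch of that precedence rule, in plain Python with every
name hypothetical: dom0 derives a default placement from the current
machine state (here, just "pick the node with the most free memory"),
and anything the user set explicitly in the guest config file wins.

def propose_placement(node_free_mem, vcpus, memory_mb):
    # Trivial default policy: put the whole guest (vnode = 1) on the
    # node that currently has the most free memory.
    node = max(node_free_mem, key=node_free_mem.get)
    return {
        "vnode": 1,
        "vcpu": ["%d-%d:%d" % (0, vcpus - 1, node)],
        "memory": ["%d:%d" % (memory_mb, node)],
    }

def effective_config(user_cfg, node_free_mem):
    # Explicit user settings always override the derived default.
    default = propose_placement(node_free_mem,
                                user_cfg.get("vcpus", 1),
                                user_cfg.get("memory_mb", 128))
    return dict((key, user_cfg.get(key, val))
                for key, val in default.items())

# e.g. with node 1 the emptier node, an unconfigured guest lands there:
# effective_config({"vcpus": 2, "memory_mb": 256}, {0: 1024, 1: 4096})
# -> {"vnode": 1, "vcpu": ["0-1:1"], "memory": ["256:1"]}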
>
> >
> >And most OSes read the NUMA configuration only at boot and at
> >CPU/memory hotplug.
> >So if Xen migrates a vcpu, Xen has to raise a hotplug event.
> The guest should not know about the vcpu migration, so Xen doesn't
> trigger a hotplug event to the guest.
>
> Maybe we should not call it vcpu migration; we can call it vnode
> migration.
> Xen (maybe a dom0 application) needs to migrate a vnode (including its
> vcpus and memory) from one physical node to another. The guest NUMA
> topology is not changed, so Xen doesn't need to inform the guest of
> the vnode migration.
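A rough dom0-side sketch of the first half of such a vnode migration,
using only the existing xm vcpu-pin interface; the function and
argument names are made up, and the memory-relocation step is exactly
the piece that does not exist today:

import subprocess

def migrate_vnode(domain, vnode_vcpus, target_cpus):
    # Step 1: re-pin the vnode's vcpus onto the target node's CPUs.
    cpu_list = ",".join(str(c) for c in target_cpus)
    for vcpu in vnode_vcpus:
        # xm vcpu-pin <domain> <vcpu> <cpulist>
        subprocess.check_call(["xm", "vcpu-pin", domain, str(vcpu), cpu_list])
    # Step 2 (missing today): remap/copy the vnode's memory onto the
    # target node.  The guest-visible topology never changes, so no
    # hotplug event is raised; the guest just sees remote memory until
    # the copy finishes.

# e.g. move the vnode holding vcpus 0-1 onto node 1 (physical CPUs 4-7):
# migrate_vnode("numa-guest", [0, 1], [4, 5, 6, 7])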
>
>
> >It's costly. So pinning vcpus to a node may be good.
> Agree
>
> >I think basically pinning a guest to a node is good.
> >If the system becomes imbalanced, and we absolutely want
> >to migrate a guest, then Xen temporarily migrates only the vcpus,
> >and we give up some performance at that time.
> As I mentioned above, it is not a temporary migration. And it will not
> impact performance (it may impact performance only during the process
> of vnode migration).
>
>
> And I think imbalance is rare in a VMM if the user doesn't create and
> destroy domains frequently. And there are far fewer VMs on a VMM than
> applications on a machine.
>
> - Anthony
>