Cui, Dexuan wrote:
> Hi Andre, will you re-post your patches?
Yes, I will do so in the next few days. I plan to add the missing
automatic assignment patch before posting.
> Now I think for the first implementation, we can make things simple,
> e.g., we should specify how many guest nodes (the "guestnodes" option
> in your patch -- I think "numa_nodes", or "nodes", may be a better
> naming) the hvm guest will see, and we distribute guest memory and
> vcpus uniformly among the guest nodes.
I agree, making things simple in the first step was also my intention.
We have enough time to make it better later if we have more experience
with it.
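For illustration, the uniform distribution mentioned above could look
like the following sketch (this is not Xen code, just a minimal model of
splitting a guest's vcpus and memory evenly across N guest nodes, with
any remainder going to the lower-numbered nodes):

```python
def distribute_uniformly(total, nodes):
    """Split `total` units across `nodes` as evenly as possible.

    Earlier nodes receive one extra unit each until the remainder
    is used up, so the per-node amounts always sum to `total`.
    """
    base, rem = divmod(total, nodes)
    return [base + (1 if i < rem else 0) for i in range(nodes)]

# Example: 6 vcpus and 4096 MB spread over 4 guest nodes
vcpus_per_node = distribute_uniformly(6, 4)     # -> [2, 2, 1, 1]
mem_per_node = distribute_uniformly(4096, 4)    # -> [1024, 1024, 1024, 1024]
```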
To be honest, my first try also used "nodes" and later "numa_nodes" to
specify the number, but I learned that it confuses users who don't see
the difference between host and guest NUMA functionality. So I wanted to
make sure that it is clear that this number is from the guest's point of
view.
> And we should add one more option "uniform_nodes" -- this boolean
> option's default value can be True, meaning if we can't construct
> uniform nodes for the guest (e.g., not enough memory can be allocated
> to the guest on the related host node), the guest creation should
> fail. This option is useful to users who want predictable guest
> performance.
I agree that we should not silently ignore the user's request, although
I'd prefer to have the word "strict" somewhere in this name. As I wrote
in one of my earlier mails, I'd opt for a single option describing the
policy; the "strict" meaning could be integrated in there:
numa_policy="strict|uniform|automatic|none|single|..."
Regards,
Andre.
--
Andre Przywara
AMD-Operating System Research Center (OSRC), Dresden, Germany
Tel: +49 351 488-3567-12
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel