Andre Przywara wrote:
> Cui, Dexuan wrote:
>> Hi Andre, will you re-post your patches?
> Yes, I will do so in the next few days. I plan to add the missing
> automatic assignment patch before posting.
Glad to know this.
BTW: To support PV NUMA, Dulloor posted some patches this Monday that change
libxc and the hypervisor, too.
Hi Andre, Dulloor,
I believe we should have some coordination to share the code and to avoid
duplicate efforts.
e.g., Dulloor's linux-01-sync-interface.patch is similar to Andre's old patch
http://lists.xensource.com/archives/html/xen-devel/2008-07/msg00254.html,
though the former is for the PV kernel and the latter is for libxc and the hypervisor.
:-)
e.g., Dulloor's xen-02-exact-node-request.patch has implemented the
MEMF_exact_node flag, which I intended to do. :-)
e.g., Dulloor's xen-03-guest-numa-interface.patch implements a hypercall to
export host NUMA info -- actually Nitin has sent out a patch to export more
useful NUMA info:
http://old.nabble.com/Host-Numa-informtion-in-dom0-td27379527.html and I
suppose Nitin will re-send it soon.
e.g., Dulloor's xen-04-node-mem-allocation.patch's xc_select_best_fit_nodes() is
similar to Andre's xc_getnodeload():
http://lists.xensource.com/archives/html/xen-devel/2010-02/msg00284.html.
e.g., In Dulloor's xen-05-basic-cpumap-utils.patch and
xen-07-tools-arch-setup.patch, I think some parts could be shared by the PV/HVM
NUMA implementations if we make some necessary changes to them.
>> Now I think for the first implementation, we can make things simple,
>> e.g., we should specify how many guest nodes (the "guestnodes" option
>> in your patch -- I think "numa_nodes", or "nodes", may be a better
>> naming) the HVM guest will see, and we distribute guest memory and
>> vcpus uniformly among the guest nodes.
> I agree, making things simple in the first step was also my intention.
> We have enough time to make it better later if we have more experience
> with it.
> To be honest, my first try also used "nodes" and later "numa_nodes" to
> specify the number, but I learned that it confuses users who don't see
> the difference between host and guest NUMA functionality. So I wanted
> to make sure that it is clear that this number is from the guest's
> point of view.
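To make the uniform distribution above concrete, here is a minimal sketch (illustrative only, not actual libxc code; the helper name is hypothetical) of splitting a guest's memory and vCPUs evenly across the requested number of guest nodes, with remainders given to the first nodes so the totals stay exact:

```python
# Hypothetical helper -- not part of libxc or any posted patch.
def split_uniform(total_mem_mb, total_vcpus, guest_nodes):
    """Return a per-node list of (memory_mb, vcpus) tuples.

    Memory and vCPUs are divided as evenly as possible; any remainder
    is assigned one unit at a time to the lowest-numbered nodes, so the
    per-node values sum exactly to the requested totals.
    """
    mem_base, mem_rem = divmod(total_mem_mb, guest_nodes)
    cpu_base, cpu_rem = divmod(total_vcpus, guest_nodes)
    return [
        (mem_base + (1 if i < mem_rem else 0),
         cpu_base + (1 if i < cpu_rem else 0))
        for i in range(guest_nodes)
    ]

# e.g. 4096 MB and 6 vCPUs over 4 guest nodes:
print(split_uniform(4096, 6, 4))
# [(1024, 2), (1024, 2), (1024, 1), (1024, 1)]
```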
>
>> And we should add one more option, "uniform_nodes" -- this boolean
>> option's default value can be True, meaning that if we can't construct
>> uniform nodes for the guest (e.g., not enough memory can be allocated
>> to the guest on the relevant host node), the guest creation should
>> fail. This option is useful to users who want predictable guest
>> performance.
> I agree that we need to avoid ignoring the user's intent, although I'd
> prefer to have the word "strict" somewhere in this name. As I wrote in
> one of my earlier mails, I'd opt for a single option describing the
> policy; the "strict" meaning could be integrated in there:
> numa_policy="strict|uniform|automatic|none|single|..."
Hi Andre,
I think this looks too complex for the first, simple implementation, and a
real user would very likely be bewildered. :-)
I think ideally we can have 2 options:
guest_nodes=n
uniform_nodes=True|False (the default is True)
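For illustration, a hypothetical guest config fragment using these two proposed options (the option names are the proposal above, not an existing implementation):

```
# hypothetical HVM guest config fragment -- option names are proposals only
memory = 4096
vcpus = 4
guest_nodes = 2        # guest sees 2 NUMA nodes
uniform_nodes = True   # fail guest creation if a uniform split is impossible
```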
Thanks,
-- Dexuan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel