Hi Andre,
I read your patches and Anthony's comments, and wrote a patch based on
the following:
1: If the guest sets numanodes=n (the default is 1, meaning this
guest will be restricted to one node), the hypervisor chooses the
starting node to pin this guest to by round robin (a sketch follows
the code below). But the method I use needs a spin_lock to prevent two
domains from being created at the same time. Are there any better
methods? I hope for your suggestions.
2: Pass the node parameter in the higher bits of the flags when
creating a domain. The domain can then record the node information in
its domain struct for further use, e.g. to show which node to pin to
in setup_guest. With this method, your patch could balance across
nodes simply, like below:
> +    for (i = 0; i <= dominfo.max_vcpu_id; i++)
> +    {
> +        node = (i * numanodes) / (dominfo.max_vcpu_id + 1) +
> +               dominfo.first_node;
> +        xc_vcpu_setaffinity(xc_handle, dom, i, nodemasks[node]);
> +    }
>
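For illustration, here is a minimal sketch of the round-robin
start-node selection from point 1 (pick_first_node, next_node and
node_lock are made-up names, not existing Xen symbols):

    /* Pick the start node for a new domain by round robin. */
    static unsigned int next_node;
    static DEFINE_SPINLOCK(node_lock);

    static unsigned int pick_first_node(unsigned int nr_nodes)
    {
        unsigned int node;

        /* Serialize concurrent domain creation so that two domains
         * do not get the same start node. */
        spin_lock(&node_lock);
        node = next_node;
        next_node = (next_node + 1) % nr_nodes;
        spin_unlock(&node_lock);

        return node;
    }

If the lock turns out to be a bottleneck, an atomic fetch-and-increment
of next_node could avoid it.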
BTW: I can't find your mail with Patch 2/4 (introduce CPU affinity
for the allocate_physmap call), so I can't apply your patch to the
source.
I have just begun my "NUMA trip"; I appreciate your suggestions. Thanks.
Best Regards
Ronghui
-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
[mailto:xen-devel-bounces@xxxxxxxxxxxxxxxxxxx] On Behalf Of Xu, Anthony
Sent: Monday, September 10, 2007 9:14 AM
To: Andre Przywara
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-devel] [PATCH 0/4] [HVM] NUMA support in HVM guests
Andre
>>
>> This always starts from node0; this may make node0 very busy, while
>> other nodes may not have much work.
>This is true, I encountered this before, but didn't want to wait
>longer before sending out the patches. Actually the "numanodes=n"
>config file option shouldn't specify the number of nodes, but a list
>of specific nodes to use, like "numanodes=0,2" to pin the domain on
>the first and the third node.
That's a good idea, to specify the nodes to use.
We can use "numanodes=0,2" in the config file, and it will be converted
into a bitmap (a long, numanodes), where every bit indicates one node.
When the guest doesn't specify "numanodes", Xen will need to choose
proper nodes for the guest, so Xen also needs to implement some
algorithm to choose proper nodes.
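For illustration, a minimal sketch of that conversion, assuming a plain
comma-separated string as input (parse_numanodes is a made-up helper,
not existing Xen code):

    #include <stdlib.h>

    /* Convert a "0,2"-style node list into a bitmap where bit n set
     * means node n may be used, e.g. parse_numanodes("0,2") == 0x5.
     * Assumes node numbers are below BITS_PER_LONG. */
    static unsigned long parse_numanodes(const char *s)
    {
        unsigned long bitmap = 0;
        char *end;

        for (;;) {
            unsigned long node = strtoul(s, &end, 10);
            if (end == s)
                break;              /* no digits: stop parsing */
            bitmap |= 1UL << node;  /* set the bit for this node */
            if (*end != ',')
                break;
            s = end + 1;
        }
        return bitmap;
    }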
>> We also need to add some limitations for numanodes. The number of
>> vcpus on a vnode should not be larger than the number of pcpus on a
>> pnode. Otherwise vcpus belonging to a domain run on the same pcpu,
>> which is not what we want.
>Would be nice, but at the moment I would push this into the sysadmin's
>responsibility.
It's reasonable.
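For what it's worth, a sketch of such a check (numanodes_ok is a
made-up helper; real code would query the physical topology for the
pcpu count per node):

    /* With vcpus spread evenly over numanodes nodes, the vcpus per
     * node must not exceed the pcpus per node, otherwise several
     * vcpus of one domain end up sharing a pcpu. */
    static int numanodes_ok(unsigned int nr_vcpus,
                            unsigned int numanodes,
                            unsigned int pcpus_per_node)
    {
        /* ceil(nr_vcpus / numanodes) vcpus land on the fullest node */
        unsigned int vcpus_per_node =
            (nr_vcpus + numanodes - 1) / numanodes;

        return vcpus_per_node <= pcpus_per_node;
    }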
>After all, my patches were more a basis for discussion than a final
>solution, so I see there is more work to do. At the moment I am
>working on including PV guests.
>
That's a very good start for supporting guest NUMA.
Regards
- Anthony
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel