On 06/07/2010 06:57, "Dulloor" <dulloor@xxxxxxxxx> wrote:
>> What are xc_cpumask (a libxc concept) related definitions doing in a
>> hypervisor public header? These aren't even used in this header file. Below
>> I suggest a vcpu_to_vnode[] array, which probably gets rid of the need for
>> this bitmask stuff anyway.
>
> Stale comment with xc_cpumask .. sorry!
> I did think of the vcpu_to_vnode array, but then we use the bitmask in
> hvm_info anyway (with vcpu_online). I thought I could at least fold them
> into a single structure.
> I could change that if you insist.
I think overall vcpu_to_vnode[] is a better way round, unless the per-node
vcpu maps are particularly handy for some reason.
>> A small number to be statically defined. Better to make your structure
>> extensible I think, perhaps including pointers out to vnode-indexed arrays?
> This structure is passed in the hvm_info page. Should I use offset/len for
> these dynamically-sized, vnode-indexed arrays?
The 'hvm_info page' is a slightly restrictive concept really. Actually the
hvm_info data gets plopped down at a fixed location below 1MB in the guest's
memory map, and you can just extend from there even across a page boundary.
I would simply include pointers out to the dynamically-sized arrays; and
their sizes should be implicit given nr_vnodes.
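As a rough sketch of that suggestion (field and type names here are purely
illustrative, not part of the interface under review), the fixed header could
carry byte offsets to trailing arrays whose lengths follow from nr_vnodes and
nr_vcpus:

    /* Illustrative sketch only -- not the patch under review. */
    struct xen_domain_numa_info {
        uint8_t  version;
        uint8_t  type;
        uint8_t  nr_vcpus;
        uint8_t  nr_vnodes;
        /* Byte offsets, from the start of this structure, to:
         *   vnode_info[nr_vnodes]
         *   vcpu_to_vnode[nr_vcpus]
         *   vnode_distance[nr_vnodes * nr_vnodes]
         * Array lengths are implicit, so no XEN_MAX_VNODES is needed. */
        uint32_t vnode_info_off;
        uint32_t vcpu_to_vnode_off;
        uint32_t vnode_distance_off;
    };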
>> How do vnodes and mnodes differ? Why should a guest care about or need to
>> know about both, whatever they are?
> vnode_id is the node-id in the guest and mnode_id refers to the real
> (machine) node it maps to. Actually I don't need vnode_id. Will take that out.
Yes, that's a completely unnecessary distinction.
>>
>>> + uint32_t nr_pages;
>>
>> Not an address range? Is that implicitly worked out somehow? It should at
>> least be commented, but better still, give an explicit <start,end> range?
>
> The node address ranges are assumed contiguous and increasing. I will
> change that to <start,end> ranges.
Thanks.
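A hypothetical shape for that, pfn-based and purely illustrative (the exact
field names and units are for the patch author to decide):

    /* Illustrative sketch only. */
    struct xen_vnode_info {
        uint8_t mnode_id;            /* physical node backing this vnode */
        uint64_aligned_t start_pfn;  /* first pfn of this vnode's memory */
        uint64_aligned_t end_pfn;    /* one past the last pfn            */
    };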
>>
>>> + struct xen_cpumask vcpu_mask; /* vnode_to_vcpumask */
>>> +};
>>
>> Why not have a single integer array vcpu_to_vnode[] in the main
>> xen_domain_numa_info structure?
>
> No specific reason, except that all the vnode-related info is
> folded into a single structure. I will change that if you insist.
Personally I think it would be neater to change it. A whole bunch of cpumask
machinery disappears.
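For illustration, assuming a flat uint8_t vcpu_to_vnode[] map as suggested
above (a sketch, not code from the patch), anything that previously needed a
per-node cpumask becomes a simple scan:

    /* Illustrative sketch only: count the vcpus placed on a given vnode. */
    static unsigned int vnode_vcpu_count(const uint8_t *vcpu_to_vnode,
                                         unsigned int nr_vcpus,
                                         unsigned int vnode)
    {
        unsigned int v, count = 0;

        for ( v = 0; v < nr_vcpus; v++ )
            if ( vcpu_to_vnode[v] == vnode )
                count++;
        return count;
    }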
-- Keir
>>
>>> +#define XEN_DOM_NUMA_INTERFACE_VERSION 0x01
>>> +
>>> +#define XEN_DOM_NUMA_CONFINE 0x01
>>> +#define XEN_DOM_NUMA_SPLIT 0x02
>>> +#define XEN_DOM_NUMA_STRIPE 0x03
>>> +#define XEN_DOM_NUMA_DONTCARE 0x04
>>
>> What should the guest do with these? You're rather light on comments in this
>> critical interface-defining header file.
> I will add comments. The intent is to share this information with the
> hypervisor and PV guests (for ballooning).
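For what it's worth, the eventual comments might read roughly as below; the
semantics are guessed from the names and this discussion, not taken from the
patch, so they need confirming by the author:

    /* Guessed semantics -- illustrative only. */
    #define XEN_DOM_NUMA_CONFINE  0x01 /* place all memory on a single node */
    #define XEN_DOM_NUMA_SPLIT    0x02 /* split memory across nodes; expose virtual topology */
    #define XEN_DOM_NUMA_STRIPE   0x03 /* stripe (interleave) memory across nodes */
    #define XEN_DOM_NUMA_DONTCARE 0x04 /* no placement preference */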
>
>>
>>> +struct xen_domain_numa_info {
>>> + uint8_t version;
>>> + uint8_t type;
>>> +
>>> + uint8_t nr_vcpus;
>>> + uint8_t nr_vnodes;
>>> +
>>> + /* XXX: hvm_info_table uses 32-bit for high_mem_pgend,
>>> + * so we should be fine 32-bits too*/
>>> + uint32_t nr_pages;
>>
>> If this is going to be visible outside HVMloader (e.g., in PV guests) then
>> just make it a uint64_aligned_t and be done with it.
>
> Will do that.
>>
>>> + /* Only (nr_vnodes) entries are filled */
>>> + struct xen_vnode_info vnode_info[XEN_MAX_VNODES];
>>> + /* Only (nr_vnodes*nr_vnodes) entries are filled */
>>> + uint8_t vnode_distance[XEN_MAX_VNODES*XEN_MAX_VNODES];
>>
>> As suggested above, make these pointers out to dynamic-sized arrays. No need
>> for XEN_MAX_VNODES at all.
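Continuing the earlier offsets sketch (same caveat: illustrative names only),
a consumer can then index the trailing arrays purely from nr_vnodes and the
stored offsets, for example:

    /* Illustrative sketch only: look up an entry of the distance matrix. */
    static uint8_t vnode_distance(const struct xen_domain_numa_info *ni,
                                  unsigned int from, unsigned int to)
    {
        const uint8_t *dist = (const uint8_t *)ni + ni->vnode_distance_off;

        return dist[from * ni->nr_vnodes + to];
    }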
>
> In general, I realise I should add more comments.
>>
>> -- Keir
>>
>>> +};
>>> +
>>> +#endif
>>
>> On 05/07/2010 09:52, "Dulloor" <dulloor@xxxxxxxxx> wrote:
>>
>>> oops .. sorry, here it is.
>>>
>>> -dulloor
>>>
>>> On Mon, Jul 5, 2010 at 12:39 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
>>> wrote:
>>>> This patch is incomplete.
>>>>
>>>>
>>>> On 03/07/2010 00:54, "Dulloor" <dulloor@xxxxxxxxx> wrote:
>>>>
>>>>> Implement the structure that will be shared with hvmloader (with HVMs)
>>>>> and directly with the VMs (with PV).
>>>>>
>>>>> -dulloor
>>>>>
>>>>> Signed-off-by : Dulloor <dulloor@xxxxxxxxx>
>>>>
>>>>
>>>>
>>
>>
>>
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel