xen-devel

Re: [Xen-devel] [XEN][vNUMA][PATCH 3/9] public interface

To: Dulloor <dulloor@xxxxxxxxx>
Subject: Re: [Xen-devel] [XEN][vNUMA][PATCH 3/9] public interface
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Tue, 6 Jul 2010 13:57:02 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 06 Jul 2010 05:58:13 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <AANLkTinMEypyPK_R6eBgx_7ar95SUKQWSHczEhINDuzQ@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: Acsc0C9/78DIPEDUQzK7oAHidBaOSAAOoqlP
Thread-topic: [Xen-devel] [XEN][vNUMA][PATCH 3/9] public interface
User-agent: Microsoft-Entourage/12.24.0.100205
On 06/07/2010 06:57, "Dulloor" <dulloor@xxxxxxxxx> wrote:

>> What are xc_cpumask (a libxc concept) related definitions doing in a
>> hypervisor public header? These aren't even used in this header file. Below
>> I suggest a vcpu_to_vnode[] array, which probably gets rid of the need for
>> this bitmask stuff anyway.
> 
> Stale comment with xc_cpumask ... sorry!
> I did think of the vcpu_to_vnode array, but then we use the bitmask in
> hvm_info anyway (with vcpu_online). I thought I could at least fold them
> into a single structure.
> I could change that if you insist.

I think overall vcpu_to_vnode[] is a better way round, unless the per-node
vcpu maps are really particularly handy for some reason.

>> A small number to be statically defined. Better to make your structure
>> extensible I think, perhaps including pointers out to vnode-indexed arrays?
> This structure is passed in the hvm_info page. Should I use offset/len for
> these dynamically-sized, vnode-indexed arrays?

The 'hvm_info page' is a slightly restrictive concept really. Actually the
hvm_info data gets plopped down at a fixed location below 1MB in the guest's
memory map, and you can just extend from there even across a page boundary.
I would simply include pointers out to the dynamically-sized arrays; and
their sizes should be implicit given nr_vnodes.
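
For illustration, a rough sketch of that layout, using offsets to the
trailing arrays (one possible reading of "pointers out to" them); the
field names here are invented, not taken from the patch:

    struct xen_domain_numa_info {
        uint8_t  version;
        uint8_t  type;
        uint8_t  nr_vcpus;
        uint8_t  nr_vnodes;
        uint64_aligned_t nr_pages;
        /* Byte offsets from the start of this structure to the
         * variable-sized arrays laid out directly after it (they may
         * cross a page boundary).  The lengths are implicit: they
         * follow from nr_vcpus and nr_vnodes, so no 'len' fields. */
        uint32_t vnode_info_off;      /* nr_vnodes entries             */
        uint32_t vnode_distance_off;  /* nr_vnodes * nr_vnodes entries */
        uint32_t vcpu_to_vnode_off;   /* nr_vcpus entries              */
    };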

>> How do vnodes and mnodes differ? Why should a guest care about or need to
>> know about both, whatever they are?
> vnode_id is the node-id in the guest and mnode_id refers to the real node
> it maps to. Actually I don't need vnode_id. Will take that out.

Yes, that's a completely unnecessary distinction.

>> 
>>> +    uint32_t nr_pages;
>> 
>> Not an address range? Is that implicitly worked out somehow? Should be
>> commented, but even better just a <start,end> range explicitly given?
> 
> The node address ranges are assumed contiguous and increasing. I will
> change that to <start,end> ranges.

Thanks.
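
A sketch of what the explicit range could look like (field names invented
for illustration):

    struct xen_vnode_info {
        uint8_t mnode_id;             /* physical node backing this vnode */
        uint64_aligned_t start_pfn;   /* first page of the vnode's range  */
        uint64_aligned_t end_pfn;     /* one past the last page           */
    };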

>> 
>>> +    struct xen_cpumask vcpu_mask; /* vnode_to_vcpumask */
>>> +};
>> 
>> Why not have a single integer array vcpu_to_vnode[] in the main
>> xen_domain_numa_info structure?
> 
> No specific reason, except that all the vnode-related info is
> folded into a single structure. I will change that if you insist.

Personally I think it would be neater to change it. A whole bunch of
cpumask machinery disappears.
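
E.g. a single byte-per-vcpu table, purely as a sketch (numa_info and
vcpu_to_vnode_off are the invented names from the layout sketched above):

    /* Entry i is the vnode that vcpu i belongs to; this replaces the
     * per-vnode xen_cpumask and all the mask handling.  The array is
     * nr_vcpus entries long and reached via vcpu_to_vnode_off. */
    uint8_t *vcpu_to_vnode =
        (uint8_t *)numa_info + numa_info->vcpu_to_vnode_off;

    /* A lookup is then just: */
    unsigned int vnode = vcpu_to_vnode[vcpu_id];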

 -- Keir

>> 
>>> +#define XEN_DOM_NUMA_INTERFACE_VERSION  0x01
>>> +
>>> +#define XEN_DOM_NUMA_CONFINE    0x01
>>> +#define XEN_DOM_NUMA_SPLIT      0x02
>>> +#define XEN_DOM_NUMA_STRIPE     0x03
>>> +#define XEN_DOM_NUMA_DONTCARE   0x04
>> 
>> What should the guest do with these? You're rather light on comments in this
>> critical interface-defining header file.
> I will add comments. The intent is to share this information with the
> hypervisor and PV guests (for ballooning).
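
One plausible reading of those placement types, purely as a guess at the
intent (the comments below are not from the patch):

    #define XEN_DOM_NUMA_CONFINE    0x01  /* keep the whole domain on one node */
    #define XEN_DOM_NUMA_SPLIT      0x02  /* split into vnodes, each backed by one physical node */
    #define XEN_DOM_NUMA_STRIPE     0x03  /* interleave the domain's memory across nodes */
    #define XEN_DOM_NUMA_DONTCARE   0x04  /* no placement preference */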
> 
>> 
>>> +struct xen_domain_numa_info {
>>> +    uint8_t version;
>>> +    uint8_t type;
>>> +
>>> +    uint8_t nr_vcpus;
>>> +    uint8_t nr_vnodes;
>>> +
>>> +    /* XXX: hvm_info_table uses 32-bit for high_mem_pgend,
>>> +     * so we should be fine 32-bits too*/
>>> +    uint32_t nr_pages;
>> 
>> If this is going to be visible outside HVMloader (e.g., in PV guests) then
>> just make it a uint64_aligned_t and be done with it.
> 
> Will do that.
>> 
>>> +    /* Only (nr_vnodes) entries are filled */
>>> +    struct xen_vnode_info vnode_info[XEN_MAX_VNODES];
>>> +    /* Only (nr_vnodes*nr_vnodes) entries are filled */
>>> +    uint8_t vnode_distance[XEN_MAX_VNODES*XEN_MAX_VNODES];
>> 
>> As suggested above, make these pointers out to dynamic-sized arrays. No need
>> for XEN_MAX_VNODES at all.
> 
> In general, I realise I should add more comments.
>> 
>>  -- Keir
>> 
>>> +};
>>> +
>>> +#endif
>> 
>> On 05/07/2010 09:52, "Dulloor" <dulloor@xxxxxxxxx> wrote:
>> 
>>> oops .. sorry, here it is.
>>> 
>>> -dulloor
>>> 
>>> On Mon, Jul 5, 2010 at 12:39 AM, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
>>> wrote:
>>>> This patch is incomplete.
>>>> 
>>>> 
>>>> On 03/07/2010 00:54, "Dulloor" <dulloor@xxxxxxxxx> wrote:
>>>> 
>>>>> Implement the structure that will be shared with hvmloader (with HVMs)
>>>>> and directly with the VMs (with PV).
>>>>> 
>>>>> -dulloor
>>>>> 
>>>>> Signed-off-by : Dulloor <dulloor@xxxxxxxxx>
>>>> 
>>>> 
>>>> 
>> 
>> 
>> 



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel