WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

xen-devel

Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python

To: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
From: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Date: Fri, 17 Sep 2010 10:44:34 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 17 Sep 2010 02:45:24 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C9332EA.3030006@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Citrix Systems, Inc.
References: <4C9301DB.4050009@xxxxxxxxxxxxxx> <1284714037.16095.3083.camel@xxxxxxxxxxxxxxxxxxxxxx> <4C9332EA.3030006@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
On Fri, 2010-09-17 at 10:20 +0100, Juergen Gross wrote:
> On 09/17/10 11:00, Ian Campbell wrote:
> > On Fri, 2010-09-17 at 06:51 +0100, Juergen Gross wrote:
> >> Signed-off-by: juergen.gross@xxxxxxxxxxxxxx
> >>
> >> To be able to support arbitrary numbers of physical cpus it was
> >> necessary to
> >> include the size of cpumaps in the xc-interfaces for cpu pools.
> >> These were:
> >>    definition of xc_cpupoolinfo_t
> >>    xc_cpupool_getinfo()
> >>    xc_cpupool_freeinfo()
> >>
> >> diff -r d978675f3d53 tools/libxc/xc_cpupool.c
> >> --- a/tools/libxc/xc_cpupool.c  Thu Sep 16 18:29:26 2010 +0100
> >> +++ b/tools/libxc/xc_cpupool.c  Fri Sep 17 07:42:30 2010 +0200
> >> @@ -34,6 +34,20 @@ static int do_sysctl_save(xc_interface *
> >>       return ret;
> >>   }
> >>
> >> +static int get_cpumap_size(xc_interface *xch)
> >> +{
> >> +    static int cpumap_size = 0;
> >> +    xc_physinfo_t physinfo;
> >> +
> >> +    if ( cpumap_size )
> >> +        return cpumap_size;
> >> +
> >> +    if ( !xc_physinfo(xch,&physinfo) )
> >> +        cpumap_size = (physinfo.max_cpu_id + 8) / 8;
> >
> > The + 8 / 8 thing still jumps out at me every time I look at it. Perhaps
> > the intention would be clearer if written as:
> >     int nr_cpus = physinfo.max_cpu_id + 1;
> >     cpumap_size = (nr_cpus + 7) / 8;
> 
> I modified it to make it more clear.

Thanks.

> 
> > If xc_physinfo fails (which seems like a reasonably fatal type of error)
> > then this function returns 0 but the caller does not seem to check for
> > this case.
> 
> Okay, test added.
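For reference, with both of those tweaks folded in (the clearer rounding
and the failure check), the helper could end up looking roughly like the
following; this is an untested sketch that only assumes the names already
used in the patch:

    static int get_cpumap_size(xc_interface *xch)
    {
        static int cpumap_size = 0;
        xc_physinfo_t physinfo;
        int nr_cpus;

        if ( cpumap_size )
            return cpumap_size;

        /* Propagate failure: return 0 so the caller can bail out. */
        if ( xc_physinfo(xch, &physinfo) != 0 )
            return 0;

        nr_cpus = physinfo.max_cpu_id + 1;
        /* One bit per cpu, rounded up to whole bytes. */
        cpumap_size = (nr_cpus + 7) / 8;

        return cpumap_size;
    }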
> 
> >
> >> +xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_interface *xch,
> >> +                       uint32_t poolid)
> >>   {
> >> [...]
> >> +    local_size = get_cpumap_size(xch);
> >> +    cpumap_size = (local_size + 7) / 8;
> >
> > local_size has already been rounded up in get_cpumap_size. Do we really
> > need to do it again?
> 
> I've made it more clear that this is rounding to uint64.

Wouldn't that be "(.. + 63) / 8" then?
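(For concreteness, assuming local_size is the byte count returned by
get_cpumap_size(): with 65 cpus, local_size = (65 + 7) / 8 = 9 bytes, and
cpumap_size = (9 + 7) / 8 = 2 uint64 words, i.e. 16 bytes.)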
> 
> >
> >> +    size = sizeof(xc_cpupoolinfo_t) + cpumap_size * 8 + local_size;
> >
> > Why do we need both "cpumap_size * 8" and local_size additional bytes
> > here? Both contain the number of bytes necessary to contain a cpumap
> > bitmask and in fact I suspect they are both equal at this point (see
> > point about rounding above).
> 
> The hypervisor returns a cpumask based on bytes, the tools use uint64-based
> cpumasks.

Oh, I see: as well as the xc_cpupoolinfo_t and the cpumap it contains
being allocated together in a single buffer, you are also including a
third buffer which is for local use in this function only but which is
included in the memory region returned to the caller (who doesn't know
about it). That seems a bit odd to me; why not just allocate it locally
and then free it (or use alloca)?

Actually, when I complete my hypercall buffer patch this memory will
need to be separately allocated anyway, since it needs to come from a
special pool. I'd prefer it if you just used explicit separate
allocation for this buffer from the start.
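Concretely, something along these lines; only a sketch (the sysctl call
itself is elided, and only names already used in this thread are
assumed):

    xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_interface *xch, uint32_t poolid)
    {
        xc_cpupoolinfo_t *info = NULL;
        uint8_t *local = NULL;
        int local_size, cpumap_size;

        local_size = get_cpumap_size(xch);
        if ( !local_size )
            return NULL;
        cpumap_size = (local_size + 7) / 8;    /* uint64 words */

        /* Byte-based buffer for the hypervisor to fill.  Purely local;
         * with the hypercall buffer work this would become a hypercall
         * buffer allocation rather than a plain malloc. */
        local = malloc(local_size);

        /* Region returned to the caller: struct plus uint64-based cpumap. */
        info = calloc(1, sizeof(*info) + cpumap_size * 8);

        if ( !local || !info )
            goto fail;

        /* ... issue the sysctl with 'local' as the cpumap buffer, then
         * translate 'local' into the cpumap inside 'info' ... */

        free(local);
        return info;

    fail:
        free(local);
        free(info);
        return NULL;
    }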

> In practice this is equivalent as long as a multiple of 8 bytes is
> written by the hypervisor and the system is running little-endian.
> I prefer a clean interface mapping here.

Using a single uint64 when there was a limit of 64 cpus made sense, but
now that the map is variable length why not just use bytes everywhere?
It would avoid a lot of confusion about which units the various sizes
are in at the various points, and you would avoid needing to translate
between the hypervisor and tools representations too, wouldn't you?
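For instance, with a byte-granularity map the tools side needs nothing
more than the following (sketch; the helper names are made up purely for
illustration), and the byte buffer the hypervisor fills could be handed
straight back to the caller with no translation step:

    /* Hypothetical byte-based cpumap accessors -- illustrative names only. */
    static inline int cpumap_test_cpu(const uint8_t *map, int cpu)
    {
        return (map[cpu / 8] >> (cpu % 8)) & 1;
    }

    static inline void cpumap_set_cpu(uint8_t *map, int cpu)
    {
        map[cpu / 8] |= 1 << (cpu % 8);
    }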

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel