WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] Re: [Patch] support of cpu pools in xl

(I'm just back from vacation, sorry for the delay replying)

On Mon, 2010-09-20 at 05:58 +0100, Juergen Gross wrote:
> On 09/17/10 20:28, Ian Campbell wrote:
> > On Fri, 2010-09-17 at 16:53 +0100, Ian Jackson wrote:
> >> Ian Campbell writes ("Re: [Xen-devel] Re: [Patch] support of cpu pools in 
> >> xl"):
> >>> On Fri, 2010-09-17 at 12:41 +0100, Juergen Gross wrote:
> >>>> I just wanted to be able to support some (inactive) cpupools without any
> >>>> cpu allocated. It's just a number which should normally be large enough.
> >>>
> >>> What is the purpose of these inactive cpupools?
> >>
> >> Amongst other things, I would guess, the creation or removal of
> >> cpupools !
> 
> "Inactive cpupools" were meant to be cpupools without any cpus and domains
> assigned to them.
> They can exist for a short time during creation and removal, but only
> because all cpus are being explicitly removed as well.

That makes sense in itself but then why do you need to add a magic
number?

I think libxl_list_pool should look more like libxl_list_domain, which
implies that the xc_cpupool_getinfo interface should not be changed as
in your previous patch, since the new interface seems to preclude this
usage. You really need to retain the "first poolid + max number of
entries + return the actual number of entries used" interface in order
to have a usable interface when there is no way to query the maximum
pool id.

The problem with the interface you are trying to define is compounded by
the fact that the returned array is sparse and so in fact you will run
out of space at poolid == nr_cpus+32 rather than at number of pools ==
nr_cpus+32. (Note that in contrast libxl_list_domain returns a compact
array so that you run out of space at 1024 domains total, not domid
1024).

IMHO libxl_list_{pool,domain} should also realloc the buffer and go
around again in the case where the underlying xc call returned the
maximum number of entries -- since there may be more to come. Perhaps
this is less likely in the domain case (1024 domains is quite a lot, at
least today) but it seems more plausible in the pool case? I think this
is probably a separate issue though, and getting the basic semantics of
xc_cpupool_getinfo/libxl_list_pool right is more important.

> > I don't think so, libxl_create_cpupool returns a new poolid for a newly
> > created pool, so they are not needed for that.
> 
> They have a poolid, but there might be more cpupools than cpus in the system.
> This was the reason for the "+ 32". But I agree, this should be done via a
> #define.

I think it should be done by defining an interface which doesn't need
arbitrary magic numbers in the first place.

Ian.



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel