Re: [Xen-devel] Re: [Patch] support of cpu pools in xl
On 09/29/10 10:28, Ian Campbell wrote:
> (I'm just back from vacation, sorry for the delay replying)
>
> On Mon, 2010-09-20 at 05:58 +0100, Juergen Gross wrote:
>> On 09/17/10 20:28, Ian Campbell wrote:
>>> On Fri, 2010-09-17 at 16:53 +0100, Ian Jackson wrote:
>>>> Ian Campbell writes ("Re: [Xen-devel] Re: [Patch] support of cpu pools in xl"):
>>>>> On Fri, 2010-09-17 at 12:41 +0100, Juergen Gross wrote:
>>>>>> I just wanted to be able to support some (inactive) cpupools without any
>>>>>> cpu allocated. It's just a number which should normally be large enough.
>>>>>
>>>>> What is the purpose of these inactive cpupools?
>>>>
>>>> Amongst other things, I would guess, the creation or removal of
>>>> cpupools!
>>
>> "Inactive cpupools" were meant to be cpupools without any cpus and domains
>> assigned to them.
>> They can exist for a short time during creation and removal, but also as a
>> result of explicitly removing all cpus.
>
> That makes sense in itself, but then why do you need to add a magic
> number?
>
> I think libxl_list_pool should look more like libxl_list_domain, which
> implies that the xc_cpupool_getinfo interface should not be changed as
> in your previous patch, since the new interface seems to preclude this
> usage. You really need to retain the "first poolid + a max number of
> entries + return the actual number of entries used" interface in order
> to have a usable interface when there is no way to query the maximum
> pool id.
>
> The problem with the interface you are trying to define is compounded by
> the fact that the returned array is sparse, and so in fact you will run
> out of space at poolid == nr_cpus+32 rather than at number of pools ==
> nr_cpus+32. (Note that, in contrast, libxl_list_domain returns a compact
> array, so that you run out of space at 1024 domains total, not domid
> 1024.)

I think you misread the code. The returned array is NOT sparse. Please note
that the hypervisor will return the info of the next cpupool with a poolid
equal to or larger than the requested one (that's the reason why poolid is a
vital piece of return info).
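
To illustrate the semantics (a minimal sketch only -- the struct and the
stub below are made-up stand-ins, not the real libxc declarations):

#include <stdint.h>
#include <stdio.h>

/* Made-up stand-in for the info returned per cpupool. */
typedef struct {
    uint32_t poolid;    /* id of the pool this entry describes */
} cpupoolinfo_t;

/* Stub standing in for the hypercall wrapper: fills info[] with up to
 * n_max pools whose poolid is >= first_poolid, in ascending order, and
 * returns how many entries were filled. */
static int cpupool_getinfo(uint32_t first_poolid, uint32_t n_max,
                           cpupoolinfo_t *info)
{
    /* Pretend pools 0, 5 and 7 exist -- poolids may be sparse. */
    static const uint32_t pools[] = { 0, 5, 7 };
    uint32_t n = 0;

    for (unsigned int i = 0; i < 3 && n < n_max; i++)
        if (pools[i] >= first_poolid)
            info[n++].poolid = pools[i];
    return (int)n;
}

int main(void)
{
    cpupoolinfo_t info[2];
    uint32_t next = 0;
    int n;

    /* Each call returns pools with poolid >= the requested one, so
     * restarting at the last returned poolid + 1 never skips or repeats
     * a pool, even though poolids are sparse. */
    while ((n = cpupool_getinfo(next, 2, info)) > 0) {
        for (int i = 0; i < n; i++)
            printf("pool %u\n", info[i].poolid);
        next = info[n - 1].poolid + 1;
    }
    return 0;
}

The compact result array plus the "next poolid >= requested" rule is what
makes restarting the scan possible without knowing the maximum poolid.
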
> IMHO libxl_list_{pool,domain} should also realloc the buffer and go
> around again in the case where the underlying xc call returned the
> maximum number of entries -- since there may be more to come. Perhaps
> this is less likely in the domain case (1024 domains is quite a lot, at
> least today), but it seems more plausible in the pool case. I think this
> is probably a separate issue, though, and getting the basic semantics of
> xc_cpupool_getinfo/libxl_list_pool right is more important.

I agree realloc-ing the buffer for the array in libxl_list_pool is a
better solution (now easy to do as the cpumasks are allocated separately).
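
Something like this grow-and-retry loop (a sketch only, reusing the made-up
cpupool_getinfo stand-in from above -- the real code will of course use the
libxl allocation helpers):

#include <stdlib.h>

/* Returns a malloc-ed array of all pools and sets *nb_pool, or NULL on
 * error. A completely filled buffer may mean more pools exist, so the
 * buffer is doubled and the query retried from poolid 0. */
static cpupoolinfo_t *list_pools(int *nb_pool)
{
    uint32_t size = 8;
    cpupoolinfo_t *buf = NULL;
    int n;

    for (;;) {
        cpupoolinfo_t *tmp = realloc(buf, size * sizeof(*buf));
        if (!tmp) {
            free(buf);
            return NULL;
        }
        buf = tmp;

        n = cpupool_getinfo(0, size, buf);
        if (n < 0) {
            free(buf);
            return NULL;
        }
        if ((uint32_t)n < size)
            break;          /* buffer was not filled: we saw every pool */
        size *= 2;          /* full buffer: maybe more pools, retry */
    }

    *nb_pool = n;
    return buf;
}

No magic upper bound is needed: the loop terminates as soon as one query
returns fewer entries than the buffer can hold.
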
>>> I don't think so, libxl_create_cpupool returns a new poolid for a newly
>>> created pool, so they are not needed for that.
>>
>> They have a poolid, but there might be more cpupools than cpus in the system.
>> This was the reason for the "+ 32". But I agree, this should be done via a
>> #define.
>
> I think it should be done by defining an interface which doesn't need
> arbitrary magic numbers in the first place.

I'll resend modified patches.
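
For completeness, the #define variant mentioned above would merely have
named the guess -- the grow-and-retry loop makes it unnecessary (the
constant name here is illustrative only):

#include <stdint.h>

/* Illustrative only: headroom for cpupools beyond the cpu count, e.g.
 * pools with no cpu assigned -- a named constant instead of a bare
 * "+ 32". */
#define CPUPOOLS_EXTRA 32

static uint32_t max_pools(uint32_t nr_cpus)
{
    return nr_cpus + CPUPOOLS_EXTRA;
}
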
Juergen
--
Juergen Gross                    Principal Developer Operating Systems
TSP ES&S SWE OS6                 Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions     e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                    Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html