This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Re: [Patch] support of cpu pools in xl

To: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: [Patch] support of cpu pools in xl
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Fri, 01 Oct 2010 09:18:36 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Delivery-date: Fri, 01 Oct 2010 00:21:27 -0700
In-reply-to: <1285748918.16095.37655.camel@xxxxxxxxxxxxxxxxxxxxxx>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
Organization: Fujitsu Technology Solutions
References: <4C930642.3080802@xxxxxxxxxxxxxx> <1284716808.16095.3185.camel@xxxxxxxxxxxxxxxxxxxxxx> <4C9353EC.1060402@xxxxxxxxxxxxxx> <1284724524.16095.3412.camel@xxxxxxxxxxxxxxxxxxxxxx> <19603.36623.571453.446278@xxxxxxxxxxxxxxxxxxxxxxxx> <1284748120.15518.246.camel@xxxxxxxxxxxxxxxxxxxxx> <4C96EA0E.7090105@xxxxxxxxxxxxxx> <1285748918.16095.37655.camel@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv: Gecko/20100913 Iceowl/1.0b1 Icedove/3.0.7
On 09/29/10 10:28, Ian Campbell wrote:
> (I'm just back from vacation, sorry for the delay replying)
>
> On Mon, 2010-09-20 at 05:58 +0100, Juergen Gross wrote:
>> On 09/17/10 20:28, Ian Campbell wrote:
>>> On Fri, 2010-09-17 at 16:53 +0100, Ian Jackson wrote:
>>>> Ian Campbell writes ("Re: [Xen-devel] Re: [Patch] support of cpu pools in xl"):
>>>>> On Fri, 2010-09-17 at 12:41 +0100, Juergen Gross wrote:
>>>>>> I just wanted to be able to support some (inactive) cpupools without any
>>>>>> cpu allocated. It's just a number which should normally be large enough.
>>>>>
>>>>> What is the purpose of these inactive cpupools?
>>>>
>>>> Amongst other things, I would guess, the creation or removal of
>>>> cpupools!
>>
>> "Inactive cpupools" were meant to be cpupools without any cpus or domains
>> assigned to them.
>> They can exist for a short time during creation and removal, because all
>> cpus have to be removed explicitly, too.

> That makes sense in itself, but then why do you need to add a magic

> I think libxl_list_pool should look more like libxl_list_domain, which
> implies that the xc_cpupool_getinfo interface should not be changed as
> in your previous patch, since the new interface seems to preclude this
> usage. You really need to retain the "first poolid + a maximum number of
> entries in, actual number of entries used out" interface in order to have
> a usable interface when there is no way to query the maximum pool id.
>
> The problem with the interface you are trying to define is compounded by
> the fact that the returned array is sparse, so in fact you will run
> out of space at poolid == nr_cpus+32 rather than at number of pools ==
> nr_cpus+32. (Note that in contrast libxl_list_domain returns a compact
> array, so that you run out of space at 1024 domains total, not domid
I think you misread the code. The returned array is NOT sparse. Please note
that the hypervisor will return the info of the next cpu pool with a poolid
equal to or larger than the requested one (that's the reason why the poolid
is a vital piece of the return info).
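The semantics described here — each query returns the next existing pool with a poolid equal to or larger than the requested one, plus a count of entries actually filled — could be sketched as below. This is an illustrative mock, not the real libxc code: `pool_info_t`, this `cpupool_getinfo()` stand-in, and the hard-coded pool list are all invented for the example; the real `xc_cpupoolinfo_t` carries more fields.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical pool record; the real xc_cpupoolinfo_t has more fields. */
typedef struct {
    uint32_t poolid;
} pool_info_t;

/* Mock hypervisor state: poolids are sparse after creates/destroys. */
static const uint32_t pools[] = { 0, 3, 7, 8 };
#define N_POOLS (sizeof(pools) / sizeof(pools[0]))

/*
 * "first poolid + max entries in, entries used out": each slot is
 * filled with the next existing pool whose poolid is >= the one asked
 * for.  The output array is therefore compact even though poolids are
 * not, and the poolid stored in each entry tells the caller where to
 * resume the next query.
 */
static int cpupool_getinfo(uint32_t first_poolid, int max_pools,
                           pool_info_t *info)
{
    int filled = 0;

    for (size_t i = 0; i < N_POOLS && filled < max_pools; i++) {
        if (pools[i] >= first_poolid)
            info[filled++].poolid = pools[i];
    }
    return filled;  /* number of entries actually used */
}
```

A caller can thus enumerate every pool without knowing the maximum poolid: start at poolid 0 and, after each batch, continue from the last returned poolid + 1 until fewer entries than requested come back.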

> IMHO libxl_list_{pool,domain} should also realloc the buffer and go
> around again in the case where the underlying xc call returned the
> maximum number of entries -- since there may be more to come. Perhaps
> this is less likely in the domain case (1024 domains is quite a lot, at
> least today) but it seems more plausible in the pool case? I think this
> is probably a separate issue, though, and getting the basic semantics of
> xc_cpupool_getinfo/libxl_list_pool right is more important.

I agree that realloc-ing the buffer for the array in libxl_list_pool is the
better solution (it is now easy to do, as the cpumasks are allocated separately).
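The realloc-and-retry pattern under discussion might look like the sketch below. Again a hypothetical mock rather than libxl source: the `cpupool_getinfo()` stand-in just reports however many of its hard-coded pools fit the buffer, and `list_pools()` grows the array whenever a call comes back completely full, which means there may have been more pools than the buffer could hold.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical pool record, invented for illustration. */
typedef struct {
    uint32_t poolid;
} pool_info_t;

/* Mock backend with more pools than the initial buffer guess. */
static const uint32_t pools[] = { 0, 1, 4, 5, 9, 10, 11 };
#define N_POOLS ((int)(sizeof(pools) / sizeof(pools[0])))

static int cpupool_getinfo(uint32_t first_poolid, int max_pools,
                           pool_info_t *info)
{
    int filled = 0;

    for (int i = 0; i < N_POOLS && filled < max_pools; i++) {
        if (pools[i] >= first_poolid)
            info[filled++].poolid = pools[i];
    }
    return filled;
}

/*
 * Grow-and-retry enumeration: if the call filled the whole buffer,
 * there may be further pools, so double the buffer and ask again.
 * No magic "nr_cpus + 32" sizing is needed.
 */
static pool_info_t *list_pools(int *nb_pool)
{
    int size = 2;  /* deliberately small initial guess */
    pool_info_t *ptr = NULL;
    int n;

    for (;;) {
        pool_info_t *tmp = realloc(ptr, size * sizeof(*ptr));
        if (!tmp) {
            free(ptr);
            return NULL;
        }
        ptr = tmp;
        n = cpupool_getinfo(0, size, ptr);
        if (n < size)   /* buffer not exhausted: we saw every pool */
            break;
        size *= 2;      /* completely full: there may be more */
    }
    *nb_pool = n;
    return ptr;
}
```

The termination test relies on the "entries used out" return value: only a result strictly smaller than the buffer capacity proves the enumeration is complete.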

>>> I don't think so; libxl_create_cpupool returns a new poolid for a newly
>>> created pool, so they are not needed for that.
>>
>> They have a poolid, but there might be more cpupools than cpus in the system.
>> This was the reason for the "+ 32". But I agree, this should be done via a
>
> I think it should be done by defining an interface which doesn't need
> arbitrary magic numbers in the first place.

I'll resend modified patches.


Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

Xen-devel mailing list
