Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python

To: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Fri, 17 Sep 2010 12:08:51 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 17 Sep 2010 03:09:37 -0700
In-reply-to: <1284716674.16095.3180.camel@xxxxxxxxxxxxxxxxxxxxxx>
Organization: Fujitsu Technology Solutions
References: <4C9301DB.4050009@xxxxxxxxxxxxxx> <1284714037.16095.3083.camel@xxxxxxxxxxxxxxxxxxxxxx> <4C9332EA.3030006@xxxxxxxxxxxxxx> <1284716674.16095.3180.camel@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.11) Gecko/20100805 Iceowl/1.0b1 Icedove/3.0.6
Please ignore the previous mail, I hit the send button too early...

On 09/17/10 11:44, Ian Campbell wrote:
> On Fri, 2010-09-17 at 10:20 +0100, Juergen Gross wrote:
>> On 09/17/10 11:00, Ian Campbell wrote:
>>> local_size has already been rounded up in get_cpumap_size. Do we really
>>> need to do it again?
>>
>> I've made it more clear that this is rounding to uint64.
>
> Wouldn't that be "(.. + 63) / 8" then?

No, local_size is already bytes...
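
To make the units explicit, here is a small standalone sketch of the two
roundings being discussed (the cpu count and variable names are only an
example, not the actual libxc code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int ncpus = 67;                    /* example cpu count */

        /* hypervisor side: one bit per cpu, rounded up to whole bytes */
        unsigned int local_size = (ncpus + 7) / 8;          /* 9 bytes */

        /* tools side: the byte count rounded up to whole uint64 words,
           hence "+ 7" and not "+ 63" -- the input is bytes already */
        unsigned int cpumap_size = (local_size + 7) / 8;    /* 2 words */

        /* "(ncpus + 63) / 64" would only be right when starting from
           bits; starting from bytes it is "+ 7", as above */
        printf("%u bytes -> %u uint64 words\n", local_size, cpumap_size);
        return 0;
    }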



>>> +    size = sizeof(xc_cpupoolinfo_t) + cpumap_size * 8 + local_size;
>>>
>>> Why do we need both "cpumap_size * 8" and local_size additional bytes
>>> here? Both contain the number of bytes necessary to contain a cpumap
>>> bitmask and in fact I suspect they are both equal at this point (see
>>> point about rounding above).
>>
>> The hypervisor returns a cpumask based on bytes, the tools use
>> uint64-based cpumasks.
>
> Oh, I see, as well as xc_cpupoolinfo_t and the cpumap which it contains
> being allocated together in a single buffer you are also including a
> third buffer which is for local use in this function only but which is
> included in the memory region returned to the caller (who doesn't know
> about it). Seems a bit odd to me, why not just allocate it locally and
> then free it (or use alloca)?
>
> Actually, when I complete my hypercall buffer patch this memory will
> need to be separately allocated anyway, since it needs to come from a
> special pool. I'd prefer it if you just used explicit separate
> allocation for this buffer from the start.

Okay.
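
A rough sketch of what that separate allocation could look like (the
struct layout and names here are placeholders, not the final patch):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* placeholder for the real xc_cpupoolinfo_t from xenctrl.h */
    typedef struct {
        uint32_t cpupool_id;
        uint64_t *cpumap;           /* points just past the struct */
    } xc_cpupoolinfo_t;

    static xc_cpupoolinfo_t *cpupool_info(unsigned int cpumap_size,
                                          unsigned int local_size)
    {
        xc_cpupoolinfo_t *info;
        uint8_t *local;

        /* returned buffer: struct plus uint64-based cpumap only */
        info = calloc(1, sizeof(*info) + cpumap_size * 8);
        /* byte-based hypercall buffer: allocated and freed locally */
        local = calloc(1, local_size);
        if ( !info || !local )
        {
            free(info);
            free(local);
            return NULL;
        }
        info->cpumap = (uint64_t *)(info + 1);

        /* ... hypercall would fill 'local' here ... */

        /* little-endian shortcut for the byte-to-uint64 translation */
        memcpy(info->cpumap, local, local_size);

        free(local);        /* never reaches the caller */
        return info;
    }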


>>   In practice this is equivalent as long as a multiple of 8 bytes is
>> written by the hypervisor and the system is running little endian.
>> I prefer a clean interface mapping here.
>
> Using a single uint64 when there was a limit of 64 cpus made sense but
> now that it is variable length why not just use bytes everywhere? It
> would avoid a lot of confusion about what the various size units are at
> various points etc. You would avoid needing to translate between the
> hypervisor and tools representations too, wouldn't you?

This would suggest changing xc_vcpu_setaffinity() and xc_vcpu_getaffinity(),
too.
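
For reference, a portable byte-to-uint64 translation (again only a
sketch with assumed names): bit i lives in byte i/8 on the hypervisor
side and in word i/64 on the tools side, so:

    #include <stdint.h>
    #include <string.h>

    static void cpumap_bytes_to_words(uint64_t *words, const uint8_t *bytes,
                                      unsigned int nbytes)
    {
        unsigned int i;

        memset(words, 0, ((nbytes + 7) / 8) * sizeof(uint64_t));
        for ( i = 0; i < nbytes; i++ )
            words[i / 8] |= (uint64_t)bytes[i] << (8 * (i % 8));
    }

On a little-endian machine with the byte buffer padded to a multiple of
8 this produces exactly what a plain memcpy would, which is the
equivalence noted above; using bytes everywhere would make the loop, and
the question of units, disappear.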

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

Attachment: cpupool-tools-cpumask.patch
Description: Text Data
