To: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Fri, 17 Sep 2010 12:04:11 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <1284716674.16095.3180.camel@xxxxxxxxxxxxxxxxxxxxxx>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
Organization: Fujitsu Technology Solutions
References: <4C9301DB.4050009@xxxxxxxxxxxxxx> <1284714037.16095.3083.camel@xxxxxxxxxxxxxxxxxxxxxx> <4C9332EA.3030006@xxxxxxxxxxxxxx> <1284716674.16095.3180.camel@xxxxxxxxxxxxxxxxxxxxxx>
On 09/17/10 11:44, Ian Campbell wrote:
> On Fri, 2010-09-17 at 10:20 +0100, Juergen Gross wrote:
>> On 09/17/10 11:00, Ian Campbell wrote:
>>> local_size has already been rounded up in get_cpumap_size. Do we really
>>> need to do it again?
>>
>> I've made it more clear that this is rounding to uint64.
>
> Wouldn't that be "(.. + 63) / 8" then?

No, local_size is already bytes...
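A minimal sketch of the two roundings in question (the helper names are
illustrative, not taken from the patch):

#include <stdint.h>

/* Illustrative only: get_cpumap_size already returns a byte count,
 * so rounding local_size up to whole uint64 words needs "+ 7";
 * "+ 63" would only be right if the input were a bit count. */
static unsigned int cpumap_bytes(unsigned int ncpus)
{
    return (ncpus + 7) / 8;        /* bits  -> bytes */
}

static unsigned int cpumap_words(unsigned int local_size)
{
    return (local_size + 7) / 8;   /* bytes -> uint64 words */
}

/* Starting from a bit count, the uint64 rounding would instead be
 * (ncpus + 63) / 64 words. */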



>>>> +    size = sizeof(xc_cpupoolinfo_t) + cpumap_size * 8 + local_size;
>>>
>>> Why do we need both "cpumap_size * 8" and local_size additional bytes
>>> here? Both contain the number of bytes necessary to contain a cpumap
>>> bitmask and in fact I suspect they are both equal at this point (see
>>> point about rounding above).
>>
>> The hypervisor returns a cpumask based on bytes, the tools use uint64-based
>> cpumasks.
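A sketch of the translation this split implies (the helper and its name
are assumptions for illustration, not code from the patch):

#include <stdint.h>

/* Fold a byte-granular bitmap from the hypervisor into the
 * uint64-granular map the tools use; dst is assumed to be
 * zero-initialised and (nbytes + 7) / 8 words long. */
static void bytes_to_u64_map(uint64_t *dst, const uint8_t *src,
                             unsigned int nbytes)
{
    unsigned int i;

    for ( i = 0; i < nbytes; i++ )
        dst[i / 8] |= (uint64_t)src[i] << (8 * (i % 8));
}

On a little-endian machine a plain memcpy would produce the same bit
layout, which is the equivalence mentioned further down.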

> Oh, I see, as well as xc_cpupoolinfo_t and the cpumap which it contains
> being allocated together in a single buffer you are also including a
> third buffer which is for local use in this function only but which is
> included in the memory region returned to the caller (who doesn't know
> about it). Seems a bit odd to me, why not just allocate it locally then
> free it (or use alloca)?
>
> Actually, when I complete my hypercall buffer patch this memory will
> need to be separately allocated anyway since it needs to come from a
> special pool. I'd prefer it if you just used explicit separate
> allocation for this buffer from the start.

Okay.
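Roughly what that would look like (a simplified sketch assuming
local_size <= cpumap_words * 8; the function and variable names are
hypothetical, with the hypercall itself elided):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Allocate the byte-wise buffer the hypervisor fills separately,
 * fold it into the caller-visible uint64 map, then free it, instead
 * of hiding it at the end of the returned region. */
static int fetch_cpumap(uint64_t *cpumap, unsigned int cpumap_words,
                        unsigned int local_size)
{
    uint8_t *local = calloc(1, local_size);
    unsigned int i;

    if ( local == NULL )
        return -1;

    memset(cpumap, 0, cpumap_words * sizeof(uint64_t));
    /* ... hypercall fills "local" with a byte-granular bitmap ... */
    for ( i = 0; i < local_size; i++ )
        cpumap[i / 8] |= (uint64_t)local[i] << (8 * (i % 8));

    free(local);
    return 0;
}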


>> In practice this is equivalent as long as a multiple of 8 bytes is
>> written by the hypervisor and the system is running little endian.
>> I prefer a clean interface mapping here.
>
> Using a single uint64 when there was a limit of 64 cpus made sense but
> now that it is variable length why not just use bytes everywhere? It
> would avoid a lot of confusion about what various size units are at
> various points etc. You would avoid needing to translate between the
> hypervisor and tools representations too, wouldn't you?

This would suggest changing xc_vcpu_setaffinity() and
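For illustration, the byte-granular direction could look something like
this (purely a sketch; the type and macro names are assumptions, not
part of the patch or of libxc at the time):

#include <stdint.h>

/* One byte-based representation for hypervisor and tools alike:
 * a single size formula, no word-size translation needed. */
typedef uint8_t *xc_cpumap_t;                  /* hypothetical name */

#define XC_CPUMAP_BYTES(ncpus) (((ncpus) + 7) / 8)

/* e.g. xc_vcpu_setaffinity() would then take an xc_cpumap_t plus its
 * byte length instead of a single uint64_t. */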

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html
