xen-devel

Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python

To: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: [Patch] update cpumask handling for cpu pools in libxc and python
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Fri, 17 Sep 2010 11:20:42 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 17 Sep 2010 02:21:44 -0700
In-reply-to: <1284714037.16095.3083.camel@xxxxxxxxxxxxxxxxxxxxxx>
Organization: Fujitsu Technology Solutions
References: <4C9301DB.4050009@xxxxxxxxxxxxxx> <1284714037.16095.3083.camel@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.11) Gecko/20100805 Iceowl/1.0b1 Icedove/3.0.6
On 09/17/10 11:00, Ian Campbell wrote:
> On Fri, 2010-09-17 at 06:51 +0100, Juergen Gross wrote:
>> Signed-off-by: juergen.gross@xxxxxxxxxxxxxx
>>
>> To be able to support arbitrary numbers of physical cpus it was
>> necessary to include the size of cpumaps in the xc-interfaces for
>> cpu pools. These were:
>>    definition of xc_cpupoolinfo_t
>>    xc_cpupool_getinfo()
>>    xc_cpupool_freeinfo()
>>
>> diff -r d978675f3d53 tools/libxc/xc_cpupool.c
>> --- a/tools/libxc/xc_cpupool.c  Thu Sep 16 18:29:26 2010 +0100
>> +++ b/tools/libxc/xc_cpupool.c  Fri Sep 17 07:42:30 2010 +0200
>> @@ -34,6 +34,20 @@ static int do_sysctl_save(xc_interface *
>>       return ret;
>>   }
>>
>> +static int get_cpumap_size(xc_interface *xch)
>> +{
>> +    static int cpumap_size = 0;
>> +    xc_physinfo_t physinfo;
>> +
>> +    if ( cpumap_size )
>> +        return cpumap_size;
>> +
>> +    if ( !xc_physinfo(xch,&physinfo) )
>> +        cpumap_size = (physinfo.max_cpu_id + 8) / 8;

> The "+ 8 / 8" thing still jumps out at me every time I look at it.
> Perhaps the intention would be clearer if written as:
>         int nr_cpus = physinfo.max_cpu_id + 1;
>         cpumap_size = (nr_cpus + 7) / 8;

I modified it to make it clearer.
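
For reference, a minimal sketch of what the reworked helper could look
like with the nr_cpus suggestion folded in (the attached patch is the
authoritative version):

    static int get_cpumap_size(xc_interface *xch)
    {
        static int cpumap_size = 0;
        xc_physinfo_t physinfo;

        if ( cpumap_size )
            return cpumap_size;

        if ( !xc_physinfo(xch, &physinfo) )
        {
            /* one bit per cpu, rounded up to whole bytes */
            int nr_cpus = physinfo.max_cpu_id + 1;
            cpumap_size = (nr_cpus + 7) / 8;
        }

        return cpumap_size;    /* still 0 if xc_physinfo() failed */
    }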

> If xc_physinfo fails (which seems like a reasonably fatal type of
> error) then this function returns 0 but the caller does not seem to
> check for this case.

Okay, test added.
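
Roughly along these lines in the caller (a sketch only; names follow
the quoted hunk, and the PERROR use is an assumption based on how the
rest of libxc reports errors):

    local_size = get_cpumap_size(xch);
    if ( !local_size )
    {
        PERROR("Could not get number of cpus");
        return NULL;
    }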


>> +xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_interface *xch,
>> +                       uint32_t poolid)
>>   {
> [...]
>> +    local_size = get_cpumap_size(xch);
>> +    cpumap_size = (local_size + 7) / 8;

> local_size has already been rounded up in get_cpumap_size. Do we really
> need to do it again?

I've made it clearer that this is rounding to uint64.
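
I.e. something along these lines (illustrative only, not lifted from
the patch):

    /* local_size: bytes needed by the hypervisor, one bit per cpu.
     * cpumap_size: uint64 words needed to hold local_size bytes. */
    cpumap_size = (local_size + sizeof(uint64_t) - 1) / sizeof(uint64_t);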


>> +    size = sizeof(xc_cpupoolinfo_t) + cpumap_size * 8 + local_size;

> Why do we need both "cpumap_size * 8" and local_size additional bytes
> here? Both contain the number of bytes necessary to contain a cpumap
> bitmask and in fact I suspect they are both equal at this point (see
> point about rounding above).

The hypervisor returns a cpumask based on bytes, while the tools use
uint64-based cpumasks. In practice the two are equivalent as long as the
hypervisor writes a multiple of 8 bytes and the system runs little endian.
I prefer a clean interface mapping here.
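
To illustrate the mapping I mean (a hypothetical helper, not taken from
the patch; dst must be zero-initialised by the caller):

    /* Widen the hypervisor's byte-based cpumask into the uint64-based
     * cpumask used by the tools.  On little endian this yields the same
     * memory layout as a plain memcpy. */
    static void cpumap_bytes_to_uint64(uint64_t *dst, const uint8_t *src,
                                       int nr_bytes)
    {
        int i;

        for ( i = 0; i < nr_bytes; i++ )
            dst[i / 8] |= (uint64_t)src[i] << (8 * (i % 8));
    }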


Juergen

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

Attachment: cpupool-tools-cpumask.patch
Description: Text Data
