WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/

To: Ian Campbell <Ian.Campbell@xxxxxxxxxx>
Subject: Re: [Xen-devel] [Patch 1/2] support of cpupools in xl: update cpumask handling for cpu pools in libxc and python
From: Juergen Gross <juergen.gross@xxxxxxxxxxxxxx>
Date: Tue, 05 Oct 2010 15:50:06 +0200
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 05 Oct 2010 06:56:30 -0700
In-reply-to: <1285925748.16095.46858.camel@xxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Fujitsu Technology Solutions
References: <4CA58BB2.60709@xxxxxxxxxxxxxx> <1285925748.16095.46858.camel@xxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.12) Gecko/20100913 Iceowl/1.0b1 Icedove/3.0.7
On 10/01/10 11:35, Ian Campbell wrote:
(I highly recommend the patchbomb extension ("hg email"), it makes
sending series much simpler and threads the mails together in a
convenient way)

Okay, I'll try hg email for the next round...


On Fri, 2010-10-01 at 08:20 +0100, Juergen Gross wrote:
Signed-off-by: juergen.gross@xxxxxxxxxxxxxx

This should come after the description.

Okay.


To be able to support arbitrary numbers of physical cpus, it was necessary to
include the size of cpumaps in the xc-interfaces for cpu pools.
These were:
   definition of xc_cpupoolinfo_t
   xc_cpupool_getinfo()
   xc_cpupool_freeinfo()

Please also mention the change in xc_cpupool_getinfo semantics from a
caller-allocated buffer to a callee-allocated and returned one.

Okay.


@@ -64,50 +78,61 @@ int xc_cpupool_destroy(xc_interface *xch
      return do_sysctl_save(xch, &sysctl);
  }

-int xc_cpupool_getinfo(xc_interface *xch,
-                       uint32_t first_poolid,
-                       uint32_t n_max,
-                       xc_cpupoolinfo_t *info)
+xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_interface *xch,
+                       uint32_t poolid)
[...]
-    memset(info, 0, n_max * sizeof(xc_cpupoolinfo_t));
+    local_size = get_cpumap_size(xch);
+    local = alloca(local_size);
+    if (!local_size)
+    {
+        PERROR("Could not get number of cpus");
+        return NULL;
+    }

I imagine alloca(0) is most likely safe so long as you don't actually
use the result, but the man page doesn't specifically say. Probably the
check of !local_size should be before the alloca(local_size) to be on
the safe side.

Yeah, you are right.


+    cpumap_size = (local_size + sizeof(*info->cpumap) - 1) / sizeof(*info->cpumap);

xg_private.h defines a macro ROUNDUP, I wonder if that should be moved
somewhere more generic and used to clarify code like this?

Doesn't really fit here (it needs the number of bits, e.g. 6 in this case).
As this code will vanish with cpumask rework to be byte-based, I don't
change this now.


diff -r 8b7d253f0e17 tools/libxc/xenctrl.h
--- a/tools/libxc/xenctrl.h     Fri Oct 01 08:39:49 2010 +0100
+++ b/tools/libxc/xenctrl.h     Fri Oct 01 09:13:36 2010 +0100
@@ -564,15 +565,11 @@ int xc_cpupool_destroy(xc_interface *xch
[...]
+ * @parm poolid lowest id for which info is returned
+ * return cpupool info ptr (obtained by malloc)
   */
-int xc_cpupool_getinfo(xc_interface *xch,
-                       uint32_t first_poolid,
-                       uint32_t n_max,
-                       xc_cpupoolinfo_t *info);
+xc_cpupoolinfo_t *xc_cpupool_getinfo(xc_interface *xch,
+                       uint32_t poolid);

  /**
   * Add cpu to a cpupool. cpu may be -1 indicating the first unassigned.
@@ -615,10 +612,12 @@ int xc_cpupool_movedomain(xc_interface *
   *
   * @parm xc_handle a handle to an open hypervisor interface
   * @parm cpumap pointer where to store the cpumap
+ * @parm cpusize size of cpumap array in bytes
   * return 0 on success, -1 on failure
   */
  int xc_cpupool_freeinfo(xc_interface *xch,
-                        uint64_t *cpumap);
+                        uint64_t *cpumap,
+                        int cpusize);

xc_cpupool_getinfo returns a callee allocated buffer and
xc_cpupool_freeinfo expects to be given a caller allocated buffer? I
think we should make this consistent one way or the other.

Agreed.


diff -r 71f836615ea2 tools/libxl/libxl.c
--- a/tools/libxl/libxl.c       Fri Sep 24 15:54:39 2010 +0100
+++ b/tools/libxl/libxl.c       Fri Oct 01 09:03:17 2010 +0200
@@ -610,26 +610,34 @@ libxl_poolinfo * libxl_list_pool(libxl_c
  libxl_poolinfo * libxl_list_pool(libxl_ctx *ctx, int *nb_pool)
  {
      libxl_poolinfo *ptr;
-    int i, ret;
-    xc_cpupoolinfo_t info[256];
-    int size = 256;
+    int i;
+    xc_cpupoolinfo_t *info;
+    uint32_t poolid;
+    libxl_physinfo physinfo;

-    ptr = calloc(size, sizeof(libxl_poolinfo));
-    if (!ptr) {
-        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "allocating cpupool info");
+    if (libxl_get_physinfo(ctx, &physinfo) != 0) {
+        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "getting phys info");
          return NULL;
      }

Am I missing where the contents of physinfo are subsequently used in this
function?

Me too :-)
Should be in the second patch.



-    ret = xc_cpupool_getinfo(ctx->xch, 0, 256, info);
-    if (ret < 0) {
-        LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "getting cpupool info");
-        return NULL;
+    ptr = NULL;
+
+    poolid = 0;
+    for (i = 0;; i++) {
+        info = xc_cpupool_getinfo(ctx->xch, poolid);
+        if (info == NULL)
+            break;
+        ptr = realloc(ptr, (i + 1) * sizeof(libxl_poolinfo));
+        if (!ptr) {
+            LIBXL__LOG_ERRNO(ctx, LIBXL__LOG_ERROR, "allocating cpupool info");
+            return NULL;
+        }

This will leak the previous value of ptr if realloc() fails. You need to
do:
        tmp = realloc(ptr, ....)
        if (!tmp) {
                free(ptr);
                LIBXL__LOG_ERRNO(...);
                return NULL;
        }
        ptr = tmp;



Should be changed in other places, too:
libxc/xc_tmem.c
libxl/libxl.c (sometimes not even checked for error)

diff -r 71f836615ea2 tools/python/xen/lowlevel/xc/xc.c
--- a/tools/python/xen/lowlevel/xc/xc.c Fri Sep 24 15:54:39 2010 +0100
+++ b/tools/python/xen/lowlevel/xc/xc.c Fri Oct 01 09:03:17 2010 +0200
@@ -241,7 +241,7 @@ static PyObject *pyxc_vcpu_setaffinity(X
      if ( xc_physinfo(self->xc_handle, &info) != 0 )
          return pyxc_error_to_exception(self->xc_handle);

-    nr_cpus = info.nr_cpus;
+    nr_cpus = info.max_cpu_id + 1;

      size = (nr_cpus + cpumap_size * 8 - 1) / (cpumap_size * 8);
      cpumap = malloc(cpumap_size * size);

Is this (and the equivalent in getinfo) an independent bug fix for a
pre-existing issue or does it somehow relate to the rest of the changes?
I don't see any corresponding change to xc_vcpu_setaffinity, is all.

It's an independent fix. OTOH it's cpumask related, so I put it in...
xc_vcpu_setaffinity is not touched as it takes the cpumask size as
parameter.


Given the repeated uses of physinfo.max_cpu_id (here, in get_cpumap_size
etc) might a xc_get_nr_cpus() function be worthwhile?

Okay. I'll call it xc_get_max_cpus().


Presumably when you change these interfaces to use uint8_t instead of
uint64_t, this code becomes the same as the private get_cpumap_size you
defined earlier, so it might be worth exporting that functionality from
libxc?

I'll merge these functions later.


@@ -1966,7 +1967,7 @@ static PyObject *pyxc_cpupool_getinfo(Xc
      PyObject *list, *info_dict;

      uint32_t first_pool = 0;
-    int max_pools = 1024, nr_pools, i;
+    int max_pools = 1024, i;
[...]
+    for (i = 0; i < max_pools; i++)

I don't think there is any 1024 pool limit inherent in the new libxc
code, is there? You've removed the limit from libxl and I think the
right thing to do is remove it here as well.

Correct.


      {
-        free(info);
-        return pyxc_error_to_exception(self->xc_handle);
-    }
-
-    list = PyList_New(nr_pools);
-    for ( i = 0 ; i < nr_pools; i++ )
-    {
+        info = xc_cpupool_getinfo(self->xc_handle, first_pool);
+        if (info == NULL)
+            break;
          info_dict = Py_BuildValue(
              "{s:i,s:i,s:i,s:N}",
-            "cpupool",         (int)info[i].cpupool_id,
-            "sched",           info[i].sched_id,
-            "n_dom",           info[i].n_dom,
-            "cpulist",         cpumap_to_cpulist(info[i].cpumap));
+            "cpupool",         (int)info->cpupool_id,
+            "sched",           info->sched_id,
+            "n_dom",           info->n_dom,
+            "cpulist",         cpumap_to_cpulist(info->cpumap,
+                                                 info->cpumap_size));
+        first_pool = info->cpupool_id + 1;
+        free(info);
+
          if ( info_dict == NULL )
          {
              Py_DECREF(list);
-            if ( info_dict != NULL ) { Py_DECREF(info_dict); }
-            free(info);
              return NULL;
          }
-        PyList_SetItem(list, i, info_dict);
+
+        PyList_Append(list, info_dict);
+        Py_DECREF(info_dict);
      }
-
-    free(info);

      return list;
  }
@@ -2072,12 +2066,28 @@ static PyObject *pyxc_cpupool_movedomain

  static PyObject *pyxc_cpupool_freeinfo(XcObject *self)
  {
-    uint64_t cpumap;
+    uint64_t *cpumap;
+    xc_physinfo_t physinfo;
+    int ret;
+    PyObject *info = NULL;

-    if (xc_cpupool_freeinfo(self->xc_handle, &cpumap) != 0)
+    if (xc_physinfo(self->xc_handle, &physinfo))
          return pyxc_error_to_exception(self->xc_handle);

-    return cpumap_to_cpulist(cpumap);
+    cpumap = calloc((physinfo.max_cpu_id + 64) / 64, sizeof(uint64_t));

Making xc_cpupool_freeinfo allocate the buffer, like xc_cpupool_getinfo
does would remove the need for this sort of arithmetic in the users of
libxc.

Yes.


Juergen

--
Juergen Gross                 Principal Developer Operating Systems
TSP ES&S SWE OS6                       Telephone: +49 (0) 89 3222 2967
Fujitsu Technology Solutions              e-mail: juergen.gross@xxxxxxxxxxxxxx
Domagkstr. 28                           Internet: ts.fujitsu.com
D-80807 Muenchen                 Company details: ts.fujitsu.com/imprint.html

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel