
RE: [Xen-devel] [PATCH] Export Multicore information

Hi Emmanuel,
   My comments below.

Thanks & Regards,
Open Source Technology Center, Intel Corporation.
The mind is like a parachute; it works much better when it's open.

>-----Original Message-----
>From: Emmanuel Ackaouy [mailto:ack@xxxxxxxxxxxxx]
>Sent: Tuesday, December 12, 2006 12:02 PM

>In terms of topology, aren't the sibling and core maps enough
>information? I guess we could plan ahead for nodes in the
>interface too. Do we really need to export cpu, core, and
>package ids though? Also, should we export info for anything
>other than online cpus?

Yes, the thread_sibling_map & core_siblings_map duplicate the
information provided by thread_id, core_id & package_id, so we can get
rid of one of these sets. That is why I exposed them as extra
information behind the -x command line switch. In my opinion, keeping
thread_id, core_id & package_id is better because then we don't have to
worry about bitmap length in the future. Keir also suggested that
first.
   I can easily take the sibling maps out of the implementation.
 We do not have the ability to offline a processor from the hypervisor,
do we? If we gain CPU hotplug capability in the hypervisor, this patch
will need a bit of rework. Until then, using either cpu_online_map or
cpu_possible_map makes little difference, and we can stick with
cpu_online_map if it makes the code cleaner.

>I've only looked at the code to support XEN_SYSCTL_cpuinfo in
>sysctl.c. I don't get the "count" check here. Looks like we
>ignore the size of the array passed by the user. We read it
>but ignore it. Am I missing something?

I should have added some comments in the code documenting the usage of
count. It makes more sense if you look at the code calling this
interface in xc.c. There are two conventions for calling this sysctl
function.

1. With count = 0 and cpuinfo_list = NULL:
        This fills count with the number of entries a call using the
second convention can return.
  if ( (pi->count == 0) && guest_handle_is_null(pi->cpuinfo_list) )
  {
      pi->count = num_possible_cpus();
      if ( copy_to_guest(u_sysctl, sysctl, 1) )
      {
          printk("sysctl XEN_SYSCTL_cpuinfo: copy to guest (sysctl) failed\n");
          ret = -EFAULT;
      }
      break; /* inside the switch statement, similar to a return */
  }

2. User space allocates cpuinfo_list with room for count entries and
passes it to the sysctl.

In the user-level code in tools/python/xen/lowlevel/xc/xc.c,
xc_cpuinfo() calls into the sysctl:

    /* First get the count of items in the list. */
    info.count = 0;
    set_xen_guest_handle(info.cpuinfo_list, NULL);
    if ( (ret = xc_cpuinfo(self->xc_handle, &info)) != 0 )
    {
        errno = ret;
        return PyErr_SetFromErrno(xc_error);
    }

    if ( info.count <= 0 )
        return PyErr_SetFromErrno(xc_error);

    /* Now allocate space for all items and get the details. */
    list = PyList_New(info.count);
    if ( list == NULL )
        return PyErr_NoMemory();

    cpuinfo_list = PyMem_New(xc_cpuinfo_list_t, info.count);
    if ( cpuinfo_list == NULL )
        return PyErr_NoMemory();

    set_xen_guest_handle(info.cpuinfo_list, cpuinfo_list);
    if ( xc_cpuinfo(self->xc_handle, &info) != 0 )
        return PyErr_SetFromErrno(xc_error);

Xen-devel mailing list