
Re: [Xen-devel] [PATCH] Export Multicore information

Some comments:
 1. Use of cpumaps in the sysctl interface assumes no more than 64 CPUs. We got rid of that assumption everywhere else. You don’t really need the cpumaps anyway — tools can deduce them from other information (e.g., matching up core/package ids across cpus).
 2. The cacheinfo call is heinously Intel specific, especially the ‘type’ field which corresponds to a bitfield from an Intel-specific CPUID leaf. What is returned if the cacheinfo is requested on an older Intel box or on an AMD, Via, Transmeta, etc. CPU?
 3. What are these calls for? Beyond dumping a lot of info in a new xm call, who is wanting this detailed info? Particularly on the cacheinfo side I could imagine some apps wanting to know what resource they have to play with, but these sysctls are available only to dom0. And those apps will probably just go straight at CPUID anyway, and assume the cache hierarchy is symmetric (a reasonably safe assumption really).

 -- Keir

On 9/12/06 1:41 am, "Kamble, Nitin A" <nitin.a.kamble@xxxxxxxxx> wrote:

Hi Keir, Ian,
   Attached is a patch which implements "xm cpuinfo" and "xm cacheinfo" commands. Output of these commands on a 4-way Paxville system (2 cores per socket, 2 threads per core) and a 2-way Clovertown (quad-core) system is shown below.
   It would be easy to extend this functionality to other architectures such as IA64 or POWER by reusing most of the code. Other architectures would only need to implement the XEN_SYSCTL_cpuinfo: and XEN_SYSCTL_cacheinfo: switch cases in their arch_do_sysctl() function in the hypervisor.
   The changes span three areas, viz. the hypervisor, libxc, and the Python code, as seen in the diffstat below.
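As a rough sketch of the shape such a per-architecture handler takes (the type layout, field names, and command number below are simplified stand-ins, not the actual Xen structures), an arch_do_sysctl() switch case fills a caller-supplied struct for one CPU:

```c
#include <assert.h>

/* Made-up command number; the real XEN_SYSCTL_* values differ. */
#define XEN_SYSCTL_cpuinfo 100

/* Simplified stand-in for a per-CPU info payload. */
struct xen_sysctl_cpuinfo {
    unsigned int cpu, core_id, package_id;
};

/* Simplified stand-in for the real xen_sysctl_t union-of-ops. */
struct xen_sysctl {
    unsigned int cmd;
    struct xen_sysctl_cpuinfo cpuinfo;
};

/* Hypothetical per-arch handler: each architecture implements the
 * cases it supports and returns an error for the rest. */
static int arch_do_sysctl(struct xen_sysctl *op)
{
    switch (op->cmd) {
    case XEN_SYSCTL_cpuinfo:
        /* A real implementation would read the topology of
         * op->cpuinfo.cpu from CPUID or firmware tables;
         * fixed derived values stand in here. */
        op->cpuinfo.core_id = op->cpuinfo.cpu / 2;
        op->cpuinfo.package_id = op->cpuinfo.cpu / 4;
        return 0;
    default:
        return -1;   /* unhandled on this architecture */
    }
}
```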
Please apply and/or provide comments for the patch.



