Hi Keir,
Thanks for the response. My comments below.
Thanks & Regards,
Nitin
Open Source Technology Center, Intel
Corporation.
-------------------------------------------------------------------------
The mind is like a parachute; it works
much better when it's open.
From: Keir Fraser
[mailto:keir@xxxxxxxxxxxxx]
Some comments:
1. Use of cpumaps in the sysctl interface assumes no more than 64 CPUs.
We got rid of that assumption everywhere else. You don’t really need the
cpumaps anyway — tools can deduce them from other information (e.g.,
matching up core/package ids across cpus).
Makes sense to me; it was somewhat redundant information. I was trying to provide as much information as a user can get when running on a native Linux kernel.
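To illustrate Keir's point, here is a minimal sketch (field names are assumed, not taken from the patch) of how a tool can rebuild core-sibling relationships purely from per-CPU core/package ids, with no fixed-width cpumap in the interface:

#include <stdio.h>

struct cpu_topo {
    unsigned int cpu, core_id, package_id;
};

/* Two CPUs are core siblings iff they share a package id and a core id. */
static int core_siblings(const struct cpu_topo *a, const struct cpu_topo *b)
{
    return a->package_id == b->package_id && a->core_id == b->core_id;
}

int main(void)
{
    /* Example: a 1-socket, 2-core, 2-threads-per-core box. */
    struct cpu_topo cpus[] = {
        { 0, 0, 0 }, { 1, 0, 0 }, { 2, 1, 0 }, { 3, 1, 0 },
    };
    int n = sizeof(cpus) / sizeof(cpus[0]);

    for (int i = 0; i < n; i++) {
        printf("cpu%u core siblings:", cpus[i].cpu);
        for (int j = 0; j < n; j++)
            if (core_siblings(&cpus[i], &cpus[j]))
                printf(" %u", cpus[j].cpu);
        printf("\n");
    }
    return 0;
}

This scales to any number of CPUs, which avoids the 64-CPU limit of a fixed-width cpumap.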
2. The cacheinfo call is heinously Intel specific, especially the
‘type’ field which corresponds to a bitfield from an Intel-specific
CPUID leaf. What is returned if the cacheinfo is requested on an older Intel
box or on an AMD, Via, Transmeta, etc. CPU?
Yes, I agree that the cacheinfo code in the patch I sent is very Intel-specific. The same code will not run on other x86 CPUs, and on those the xm command will not provide any information at all, because the hypervisor checks for an Intel processor before gathering this information. The purpose of providing this information is to let the administrator/end user make a better decision when assigning physical CPUs to different VMs, and its real use is with multi-core or hyper-threaded processors. I am assuming that other people will extend this code to support other x86 and non-x86 multi-core processors.
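For reference, here is a minimal, self-contained sketch (not the patch's code; it assumes GCC-style inline asm on x86) of the leaf-4 CPUID walk in question. EAX[4:0] is the 'type' field Keir mentions (1 = data, 2 = instruction, 3 = unified, 0 = no more caches). A robust tool would first check the vendor string and the maximum supported leaf via CPUID(0); on CPUs without leaf 4, this walk yields nothing, which is exactly the gap being discussed:

#include <stdint.h>
#include <stdio.h>

/* Raw CPUID with a subleaf, x86 only. */
static void cpuid_count(uint32_t leaf, uint32_t subleaf,
                        uint32_t *eax, uint32_t *ebx,
                        uint32_t *ecx, uint32_t *edx)
{
    __asm__ __volatile__("cpuid"
                         : "=a"(*eax), "=b"(*ebx), "=c"(*ecx), "=d"(*edx)
                         : "a"(leaf), "c"(subleaf));
}

int main(void)
{
    uint32_t eax, ebx, ecx, edx;

    /* Intel's deterministic cache parameters: leaf 4, one subleaf
       per cache. Stop when the type field reads 0. */
    for (uint32_t sub = 0; ; sub++) {
        cpuid_count(4, sub, &eax, &ebx, &ecx, &edx);
        uint32_t type  = eax & 0x1f;        /* EAX[4:0] */
        uint32_t level = (eax >> 5) & 0x7;  /* EAX[7:5] */
        if (type == 0)
            break;
        printf("L%u cache, type %u\n", level, type);
    }
    return 0;
}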
3. What are these calls for? Beyond dumping a lot of info in a new xm call, who is
wanting this detailed info? Particularly on the cacheinfo side I could imagine
some apps wanting to know what resource they have to play with, but these
sysctls are available only to dom0. And those apps will probably just go
straight at CPUID anyway, and assume the cache hierarchy is symmetric (a
reasonably safe assumption really).
This information is for the administrator/end user of the system, to help make better decisions about partitioning the processors in the system across multiple domains. Dom0 may not be running on all the CPUs in the system, so this gives the hypervisor's view of all the CPUs it sees. Also, the cache info may not always be symmetric across the system.
Let me know if you have any further
concerns.
-- Keir
On 9/12/06 1:41 am, "Kamble, Nitin A"
<nitin.a.kamble@xxxxxxxxx> wrote:
Hi Keir, Ian,
Attached is a patch which implements the “xm cpuinfo” and “xm cacheinfo” commands. Output of these commands on a 4-way Paxville system (2 cores per socket, 2 threads per core) and a 2-way Clovertown (quad-core) system is shown below.
It would be easy to extend this functionality to other architectures such as IA64 or Power by reusing most of the code. Other architectures would need to implement the XEN_SYSCTL_cpuinfo: and XEN_SYSCTL_cacheinfo: switch cases in their arch_do_sysctl() function in the hypervisor to get this functionality; a rough skeleton is sketched below.
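As an illustration of that extension point, here is a self-contained skeleton of the dispatch; the command numbers and the payload handling are placeholders, not the patch's actual definitions:

#include <errno.h>
#include <stdint.h>
#include <stdio.h>

#define XEN_SYSCTL_cpuinfo    100   /* placeholder command numbers */
#define XEN_SYSCTL_cacheinfo  101

struct xen_sysctl {
    uint32_t cmd;
    /* in the real interface, a union of per-command payloads follows */
};

long arch_do_sysctl(struct xen_sysctl *sysctl)
{
    long ret = 0;

    switch (sysctl->cmd) {
    case XEN_SYSCTL_cpuinfo:
        /* fill in per-CPU topology (core/package ids, etc.) */
        break;
    case XEN_SYSCTL_cacheinfo:
        /* fill in per-CPU cache descriptions */
        break;
    default:
        ret = -ENOSYS;
        break;
    }
    return ret;
}

int main(void)
{
    struct xen_sysctl s = { .cmd = XEN_SYSCTL_cpuinfo };
    printf("cpuinfo ret = %ld\n", arch_do_sysctl(&s));
    return 0;
}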
The changes are distributed across three areas, viz. the hypervisor, libxc, and the Python code, as seen in the diffstat below.
Please apply and/or provide comments for the patch.