
xen-devel

[Xen-devel] xc_domain_getfullinfo() gone

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] xc_domain_getfullinfo() gone
From: Andrew Theurer <habanero@xxxxxxxxxx>
Date: Fri, 13 May 2005 09:40:57 -0500
Delivery-date: Fri, 13 May 2005 14:40:33 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 0.8 (Windows/20040913)

I noticed that xc_domain_getfullinfo() is gone from libxc. Would there be any objection to adding an xc_domain_get_vcpu_info()? I am interested in querying cpu_time for each vcpu for a utility that does something like the following (a rough sketch of the accessor itself follows the sample output):

vm-stat

cpu[util]   domN-vcpuM[util] ... domY-vcpuZ[util]
----------- -------------------------------------
cpu0[075.4] dom0-vcpu0[000.3] dom1-vcpu1[075.1]
cpu1[083.7] dom1-vcpu2[083.7]
cpu2[069.2] dom1-vcpu3[069.2]
cpu3[075.9] dom1-vcpu0[075.9]
                                                   < time interval>
cpu0[100.0] dom0-vcpu0[000.5] dom1-vcpu1[099.5]
cpu1[099.8] dom1-vcpu2[099.8]
cpu2[099.8] dom1-vcpu3[099.8]
cpu3[099.8] dom1-vcpu0[099.8]

cpu0[100.0] dom0-vcpu0[000.3] dom1-vcpu1[099.7]
cpu1[099.7] dom1-vcpu2[099.7]
cpu2[099.7] dom1-vcpu3[099.7]
cpu3[099.7] dom1-vcpu0[099.7]

cpu0[100.0] dom0-vcpu0[000.6] dom1-vcpu1[099.4]
cpu1[099.7] dom1-vcpu2[099.7]
cpu2[099.7] dom1-vcpu3[099.7]
cpu3[101.4] dom1-vcpu0[101.4]
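
For illustration, here is roughly how such an accessor could feed the utilization figures above. The prototype and the xc_vcpuinfo struct below are only assumptions on my part, not an existing libxc interface; cpu_time is assumed to be the cumulative execution time, in nanoseconds, that the hypervisor already accounts per exec domain.

#include <stdint.h>

/* Assumed interface, for illustration only -- not current libxc code.
 * cpu_time is taken to be cumulative execution time in nanoseconds. */
typedef struct xc_vcpuinfo {
    int      online;     /* vcpu exists and is runnable        */
    int      cpu;        /* physical cpu the vcpu last ran on  */
    uint64_t cpu_time;   /* cumulative execution time (ns)     */
} xc_vcpuinfo_t;

int xc_domain_get_vcpu_info(int xc_handle, uint32_t domid,
                            uint32_t vcpu, xc_vcpuinfo_t *info);

/* Utilization over one sampling interval: the delta of cumulative
 * cpu_time divided by elapsed wall-clock time.  Readings slightly
 * above 100% (e.g. the 101.4 above) are just sampling jitter. */
static double vcpu_util(uint64_t prev_cpu_time_ns,
                        uint64_t curr_cpu_time_ns,
                        double interval_secs)
{
    double busy_secs = (curr_cpu_time_ns - prev_cpu_time_ns) / 1e9;
    return busy_secs / interval_secs * 100.0;
}

vm-stat would simply sample each vcpu once per interval and print the per-vcpu and per-physical-cpu percentages as shown above.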

And while we're on the subject: I would also like to track exec_domain context switches per physical cpu and store the count as ctx_switches in the schedule_data struct. I believe context switches would be a good stat to have; for example, it could expose problems such as heavy domU network traffic on a one-cpu system. Any objections to this, or suggestions?
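
To make that concrete, something along these lines is what I have in mind. The existing members of schedule_data are elided, and the hook name below is illustrative only, not an existing Xen function:

/* Sketch only -- existing schedule_data members elided. */
struct schedule_data {
    /* ... existing per-cpu scheduler state ... */
    unsigned long ctx_switches;   /* proposed: context switches on this cpu */
};

/* Called from the scheduler on this physical cpu once it has committed
 * to switching from prev to next, where prev != next. */
static inline void account_ctx_switch(struct schedule_data *sd)
{
    sd->ctx_switches++;
}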

Thanks,

-Andrew





_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
