This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] Xenoprof and the hypervisor

To: "Ray Bryant" <raybry@xxxxxxxxxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] Xenoprof and the hypervisor
From: "Santos, Jose Renato G" <joserenato.santos@xxxxxx>
Date: Wed, 24 May 2006 14:06:34 -0700
Cc: David Carr <dc@xxxxxxxxx>
Delivery-date: Wed, 24 May 2006 14:07:02 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <200605241446.12910.raybry@xxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcZ/asTsQob0ijTpQ8uyzo4pvfY4nAACRiog
Thread-topic: [Xen-devel] Xenoprof and the hypervisor

Xiaowei Yang from Intel posted a patch a few weeks ago that enables
passive domain support. Passive domain support does exactly what you
want: it lets you specify the fully virtualized guest as a passive
domain (using the --passive_domains=<domid> option to opcontrol in
dom0). With this patch, all samples collected while the passive domain
is running are delivered to dom0. Xiaowei also added support for
decoding the samples in kernel space to the specific kernel symbols of
the passive domains (you need to pass the kernel image as an option to
opcontrol). With this you are able to get all samples in the system.
The only limitation is that oprofile will not be able to decode
user-level samples for the passive domains; it will aggregate them
under a single "user-level" category, but all Xen and kernel samples
will be mapped to their associated symbols.
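Based on the description above, a dom0 session might look roughly like
the sketch below. This is not authoritative: the exact option spellings
come from the patched opcontrol, and the domain ID and image paths are
placeholders.

```shell
# Sketch of profiling an HVM guest as a passive domain from dom0.
# Flag names and paths are assumptions based on the description above.

# Start the daemon, profiling Xen and dom0, and register domain 1 as a
# passive domain; pass the guest kernel image so its samples can be
# resolved to kernel symbols.
opcontrol --start-daemon \
          --xen=/boot/xen-syms \
          --vmlinux=/boot/vmlinux-dom0 \
          --passive_domains=1 \
          --passive-images=/boot/vmlinux-guest   # hypothetical flag name

opcontrol --start
# ... run the workload in the guest ...
opcontrol --stop
opcontrol --dump     # flush collected samples
opreport -l          # Xen/dom0/guest-kernel samples resolve to symbols;
                     # guest user-level samples appear as one aggregate
```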

I hope this helps


>> -----Original Message-----
>> From: Ray Bryant [mailto:raybry@xxxxxxxxxxxxxxxxx] 
>> Sent: Wednesday, May 24, 2006 12:46 PM
>> To: xen-devel@xxxxxxxxxxxxxxxxxxx
>> Cc: Santos, Jose Renato G; David Carr
>> Subject: Re: [Xen-devel] Xenoprof and the hypervisor
>> On Tuesday 23 May 2006 17:13, Santos, Jose Renato G wrote:
>> > The sample is delivered to the VCPU (and associated domain) that is
>> > currently running on the CPU when the sample is taken, as computed
>> > by the Xen macro "current". That means that samples taken when a
>> > hardware interrupt is running are associated with the VCPU/domain
>> > that was interrupted by the HW interrupt.
>> > Renato
>> >
>> So let's suppose what we are trying to do is to optimize or measure
>> the performance of the hypervisor itself? What we would like to
>> happen in that case is to collect all of the samples that found the
>> hypervisor running when the NMI occurred and ship those samples off
>> to, say, dom0 for later collection.
>> At the moment, if a fully virtualized guest is taking most of the
>> cycles (say > 97.5% of the cpu), and without passive domain support
>> in xenoprof, what happens is that 97.5% of the samples are discarded,
>> because the fully virtualized domain is not being profiled
>> (is_profiled(vcpu->domain) is false in xenoprof_log_event()).
>> So, I'd just like those samples handed to dom0, if possible. I've
>> tried to do this, but it appears oprofile then throws away the
>> samples (needs to be looked at some more to be sure).
>> So what's the easiest way to get those samples collected and out to
>> dom0?
>> --
>> Ray Bryant
>> AMD Performance Labs                   Austin, Tx
>> 512-602-0038 (o)                 512-507-7807 (c)
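For concreteness, the dispatch Ray describes can be modeled as a toy
routine. This is NOT the Xen source: the struct and field names are
hypothetical; it only sketches the control flow with and without the
passive-domain patch.

```c
/*
 * Toy model of the sample-routing logic discussed above.
 * Without passive-domain support, a sample taken while an unprofiled
 * domain runs is dropped; with it, the sample is delivered to dom0.
 */
#define DOM_DISCARD (-1)

struct vcpu_ctx {
    int domid;        /* domain the interrupted VCPU belongs to */
    int is_profiled;  /* domain is an active xenoprof participant */
    int is_passive;   /* domain was registered as a passive domain */
};

/* Returns the domid whose sample buffer receives the event,
 * or DOM_DISCARD if the sample is thrown away. */
static int route_sample(const struct vcpu_ctx *cur, int passive_support)
{
    if (cur->is_profiled)
        return cur->domid;   /* active domains keep their own samples */
    if (passive_support && cur->is_passive)
        return 0;            /* passive samples are delivered to dom0 */
    return DOM_DISCARD;      /* unpatched behavior Ray observes */
}
```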

Xen-devel mailing list
