[XenPPC] performance profiling current and future steps

To: xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
Subject: [XenPPC] performance profiling current and future steps
From: Christian Ehrhardt <ehrhardt@xxxxxxxxxxxxxxxxxx>
Date: Tue, 20 Mar 2007 19:09:27 +0100
Delivery-date: Tue, 20 Mar 2007 11:08:17 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-ppc-devel-request@lists.xensource.com?subject=help>
List-id: Xen PPC development <xen-ppc-devel.lists.xensource.com>
List-post: <mailto:xen-ppc-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ppc-devel>, <mailto:xen-ppc-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ppc-devel>, <mailto:xen-ppc-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-ppc-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 1.5.0.10 (X11/20070301)
Hi,
this mail consists of two parts. Part I tries to summarize all the performance-profiling-related discussions of the past few weeks in a short item list that is now on my todo list ;) In Part II I want to encourage everyone to discuss the following steps, which are profiling xen itself and profiling passive domains. I think we should discuss and shape the ideas for these steps while Part I is being implemented over the next weeks.

Part I
== profiling xenppc - implementation subparts ==
completely independent domains - context_switch:
-load/save MMCR0, the PMCs and MMCRA in context_switch() (see the sketch below)
->we have to load/save these if prev OR next is measuring, so we have to save/restore all of
 them once one domain starts to measure
->do it dependent on a variable, don't save/restore them always (that would slow down every context switch)
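To make that concrete, here is a minimal C sketch of the conditional save/restore on context_switch. The struct layout, the perfmon_active flag and the function names are made up for illustration, not existing xenppc code; only the SPRN_* constants and mfspr()/mtspr() are assumed to be available as in the linux powerpc <asm/reg.h>:

/* Sketch only - struct layout, flag and function names are illustrative. */
struct perf_sprs {
    unsigned long mmcr0;
    unsigned long mmcr1;
    unsigned long mmcra;
    unsigned long pmc[8];          /* PMC1..PMC8 */
};

struct vcpu_perf {
    int perfmon_active;            /* set once this domain starts measuring */
    struct perf_sprs sprs;
};

static void save_perf_sprs(struct perf_sprs *p)
{
    p->mmcr0 = mfspr(SPRN_MMCR0);
    p->mmcr1 = mfspr(SPRN_MMCR1);
    p->mmcra = mfspr(SPRN_MMCRA);
    p->pmc[0] = mfspr(SPRN_PMC1);
    /* ... PMC2..PMC8 analogously ... */
}

static void restore_perf_sprs(const struct perf_sprs *p)
{
    mtspr(SPRN_MMCR0, p->mmcr0);
    mtspr(SPRN_MMCR1, p->mmcr1);
    mtspr(SPRN_MMCRA, p->mmcra);
    mtspr(SPRN_PMC1, p->pmc[0]);
    /* ... PMC2..PMC8 analogously ... */
}

/* called from context_switch(); the SPRs are only touched once at least
 * one of the two domains is measuring, to keep the common path fast */
static void perf_context_switch(struct vcpu_perf *prev, struct vcpu_perf *next)
{
    if (prev->perfmon_active || next->perfmon_active) {
        save_perf_sprs(&prev->sprs);
        restore_perf_sprs(&next->sprs);
    }
}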

Ensure MMCR0[FCH] for this first step:
-always set MMCR0[FCH] in xen when entering xen space. This should prevent a domain from
 messing up MMCR0[FCH]
->EXCEPTION_HEAD in exception.S sets MMCR0[FCH] unconditionally (see the sketch below)
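The actual change would of course live in EXCEPTION_HEAD in exception.S; as a C-level illustration of the intended effect (the MMCR0_FCH name and bit value follow the linux reg.h convention, where this bit is called MMCR0_FCHV - it may be named differently in xenppc):

/* illustration only - on entry into xen space, freeze the counters while
 * in hypervisor state so guest profiling does not see xen activity */
#define MMCR0_FCH 0x00000001UL     /* freeze counters in hypervisor state
                                      (linux calls this bit MMCR0_FCHV) */

static inline void freeze_counters_in_hv(void)
{
    mtspr(SPRN_MMCR0, mfspr(SPRN_MMCR0) | MMCR0_FCH);
}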

Inform xen about profiling:
->use a hypercall in the setup of oprofile, as Jimi suggested (see the sketch below)
->store the initial MMCR0/1, PMCs and MMCRA and restore this set on the LAST hypercall
 that says "end profiling" (refcount-like)
->use this initial set for non-profiling domains too if they have not yet
 stored their own set on a context_switch to a profiling one
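A hedged sketch of how the refcount-like hypercall handling could look; PERFMON_START/STOP, perfmon_users and the rest are invented names for discussion (a real implementation would also need locking), and it reuses the perf_sprs helpers from the context_switch sketch above:

/* Sketch only - op names and fields are illustrative, not an existing
 * xen interface. */
#define PERFMON_START 0
#define PERFMON_STOP  1

static struct perf_sprs default_sprs;   /* the pre-profiling register set */
static unsigned int perfmon_users;      /* how many domains are profiling */

static int do_perfmon_op(int op, struct vcpu_perf *v)
{
    int rc = 0;

    switch (op) {
    case PERFMON_START:
        if (perfmon_users++ == 0)
            save_perf_sprs(&default_sprs);    /* keep the initial set */
        v->perfmon_active = 1;
        break;
    case PERFMON_STOP:
        v->perfmon_active = 0;
        if (--perfmon_users == 0)
            restore_perf_sprs(&default_sprs); /* last caller restores it */
        break;
    default:
        rc = -1;
    }
    return rc;
}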

IRQ setup in Linux:
->each guest already sets up its pmc_irq handler in head32.S/head64.S, as is done now,
 behind its own address translation and therefore handles its "own" perf IRQs
->the pmc_irq does not affect MSR[HV], so the irq is handled by the right linux guest
->xen sets up no handler for its own address space (first step)

MMCR0[FCM1] and MSR[PMM] usage:
-the linux implementation uses MMCR0[FCM1]=1 to sample only when PMM=0. No change here in the first step, because we set MMCR0[FCH] anyway and save/restore everything on context switch

Even if this step 1 is not yet about profiling xen itself, it might help us in xenppc to: a) understand how virtualization changes the runtime behavior of a guest in our case, and b) profile hotspots in new components not known to non-virtualized linux, e.g. the *front drivers.


Part II
== Thoughts about the way to step 2 - profiling xen ==
->the hypercall could now additionally pass a function pointer for a function in linux
 that handles xen perf interrupts
->only one domain can set this up. Xen then sets up its own handler for the 0xf00
 pmc irq and passes the sampled data via a shared buffer/virtual irq to that domain
 (this part would be similar to xenoprof; see the sketch after this list)
->in this case MMCR0[FCH] is set to zero in exception.S for as long as the profiling
 takes place, so that xen space is counted as well
->the handling of the sampled xen data in the "main" domain could be very
 similar to the xenoprof approach, which also passes xen samples to the primary sampling
 domain. In this way we should be able to reuse a lot of code there.
->additionally, each of our samples carries a clear flag in MMCRA[SAMPHV] telling whether
 it was taken in the hypervisor. This should allow an early code unification
 without a lot of "magic"
-If this works, we would be able to profile each domain completely
 independently of the others, because each would have its own saved/restored perf counters.
 For example, at the maximum stage of expansion this would enable our solution to
 profile one domain by cycles and another by L2 misses.
The xen samples would be managed by a primary domain, e.g. the first one that requests it
via the hypercall - later ones get an EBUSY or something like that
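To make the shared-buffer idea a bit more tangible, here is a rough sketch of a sample ring between xen and the primary profiling domain, xenoprof-style. All names, the layout and the fixed size are assumptions for discussion, not a proposed interface:

/* Sketch only - names, layout and sizes are illustrative. */
#define SAMPLE_BUF_ENTRIES 1024

struct xenppc_sample {
    unsigned long pc;       /* sampled address, e.g. from SIAR */
    unsigned int  domid;    /* domain that was running */
    unsigned int  in_xen;   /* 1 if MMCRA[SAMPHV] said "hypervisor" */
};

struct xenppc_sample_buf {
    unsigned int head;      /* written by xen */
    unsigned int tail;      /* written by the primary domain */
    struct xenppc_sample samples[SAMPLE_BUF_ENTRIES];
};

/* would be called from xen's 0xf00 performance monitor handler */
static void record_xen_sample(struct xenppc_sample_buf *buf,
                              unsigned long pc, unsigned int domid,
                              int samphv)
{
    unsigned int next = (buf->head + 1) % SAMPLE_BUF_ENTRIES;

    if (next == buf->tail)
        return;                        /* buffer full, drop the sample */

    buf->samples[buf->head].pc     = pc;
    buf->samples[buf->head].domid  = domid;
    buf->samples[buf->head].in_xen = samphv ? 1 : 0;
    buf->head = next;
    /* afterwards raise the virtual irq so the primary domain drains it */
}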

== Thoughts about step 2b - profiling passive domains ==
->because the solution for profiling domains is so similar to the plain linux
 oprofile approach, it could be possible to enable "normal" performance
 monitor usage in such a domain, as long as another domain, e.g. an admin on dom0, tells
 xen that there will be some profiling and that it has to save/restore the performance
 SPRs. This is not fully passive, but at least a solution for non
 virtualization aware domains, whatever these might be in xenppc ;)
-To really discuss that step, step 1 has to shape up into its final
 implementation so we know it works the way we currently think

--

Grüsse / regards, Christian Ehrhardt

IBM Linux Technology Center, Open Virtualization
+49 7031/16-3385
Ehrhardt@xxxxxxxxxxxxxxxxxxx
Ehrhardt@xxxxxxxxxx

IBM Deutschland Entwicklung GmbH
Chairman of the Supervisory Board: Johann Weihen; Management: Herbert Kircher; Registered office: Böblingen
Registration court: Amtsgericht Stuttgart, HRB 243294


_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ppc-devel