WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-ppc-devel

Re: [XenPPC] performance profiling current and future steps

To: Christian Ehrhardt <ehrhardt@xxxxxxxxxxxxxxxxxx>
Subject: Re: [XenPPC] performance profiling current and future steps
From: Jimi Xenidis <jimix@xxxxxxxxxxxxxx>
Date: Thu, 22 Mar 2007 12:08:07 -0400
Cc: xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Thu, 22 Mar 2007 09:06:56 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <46002357.4080200@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-ppc-devel-request@lists.xensource.com?subject=help>
List-id: Xen PPC development <xen-ppc-devel.lists.xensource.com>
List-post: <mailto:xen-ppc-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ppc-devel>, <mailto:xen-ppc-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-ppc-devel>, <mailto:xen-ppc-devel-request@lists.xensource.com?subject=unsubscribe>
References: <46002357.4080200@xxxxxxxxxxxxxxxxxx>
Sender: xen-ppc-devel-bounces@xxxxxxxxxxxxxxxxxxx
Some comments:

On Mar 20, 2007, at 2:09 PM, Christian Ehrhardt wrote:

Hi,
this mail consists of two parts. Part I tries to summarize all the performance-profiling discussions of the past few weeks in a short item list that is now on my todo list ;) In Part II I want to encourage everyone to discuss the following steps: profiling Xen and profiling passive domains. I think we should discuss and shape the ideas for these steps while Part I is developed over the next weeks.

Part I
== profiling xenppc - implementation subparts ==
completely independent domains - context_switch:
-load/save MMCR0, PMCs and MMCRA in context_switch()
->we have to load/save these if prev OR next is measuring, so in the end, once one
 domain starts to measure, we have to save/restore all of them
->do it dependent on a variable, don't save/restore them always (that slows down every context switch)

Maybe not restore them, but at a minimum turn them off, and to be "correct", zero them.
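The conditional save/restore described above could be sketched as follows. This is an illustration only: the structures, field names, and the `save_pmu`/`load_pmu` helpers are assumptions, and a plain struct stands in for the real `mfspr`/`mtspr` register accesses.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct pmu_state {
    uint64_t mmcr0, mmcr1, mmcra;
    uint32_t pmc[8];
};

struct vcpu {
    int pmu_in_use;          /* set once this domain starts measuring */
    struct pmu_state pmu;
};

/* Simulated hardware registers (real code would use mfspr/mtspr). */
static struct pmu_state hw;

#define MMCR0_FC 0x80000000ULL   /* freeze all counters (bit value illustrative) */

static void save_pmu(struct vcpu *v) { v->pmu = hw; }
static void load_pmu(struct vcpu *v) { hw = v->pmu; }

void context_switch(struct vcpu *prev, struct vcpu *next)
{
    /* Only pay the cost if prev OR next is measuring; once one domain
     * starts, its counters must not leak into (or from) its neighbour. */
    if (prev->pmu_in_use || next->pmu_in_use) {
        save_pmu(prev);
        if (next->pmu_in_use) {
            load_pmu(next);
        } else {
            /* per the comment above: freeze and zero rather than restore */
            memset(&hw, 0, sizeof(hw));
            hw.mmcr0 = MMCR0_FC;
        }
    }
}
```

A switch between two non-measuring domains takes only the flag test, which keeps the common-path cost near zero.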


Ensure MMCR0[FCH] for this first step:
-(ensure) always set MMCR0[FCH] in Xen when entering Xen space. This should prevent a domain
 from messing up MMCR0[FCH]
->EXCEPTION_HEAD in exceptions.S always sets MMCR0[FCH]

You need at least the following instructions:
    mfspr r0, SPRN_MMCR0
    ori r0, r0, 1 /* MMCR0[FCH] */
    mtspr SPRN_MMCR0, r0

Unfortunately, there is not enough room in EXCEPTION_HEAD for that, and you will get:

  exceptions.S: Assembler messages:
  exceptions.S:246: Error: attempt to .org/.space backwards? (-4)
  exceptions.S:253: Error: attempt to .org/.space backwards? (-4)
  exceptions.S:260: Error: attempt to .org/.space backwards? (-4)
  exceptions.S:267: Error: attempt to .org/.space backwards? (-4)
  exceptions.o: Bad value
  exceptions.S:626: FATAL: Can't write exceptions.o: Bad value


I suggest we start a new macro, PMU_SAVE_STATE(save, scratch), which does the above (for now using only scratch), and sprinkle it into all the code that EXCEPTION_HEAD branches to.
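A minimal sketch of such a macro, assuming GAS `.macro` syntax and the same bit value as the snippet above (the macro name comes from the suggestion here; the exact MMCR0[FCH] encoding would need checking against the architecture books before use):

```asm
.macro PMU_SAVE_STATE save, scratch
    mfspr   \scratch, SPRN_MMCR0
    ori     \scratch, \scratch, 1    /* MMCR0[FCH]: freeze counters in hypervisor */
    mtspr   SPRN_MMCR0, \scratch
    /* for now the \save operand is unused; the actual state save can grow here */
.endm
```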




Inform Xen about profiling:
->use a hypercall in the setup of oprofile, as Jimi suggested
->store the initial MMCR0/1, PMCs and MMCRA and restore them on the LAST hypercall
 that says "end profiling" (refcount-like)

I think off and zeroed is fine, so there is no sense in loading a bunch of zeros.

->use this initial set for non-profiling domains too if they have not yet
 stored their own set on a context_switch to a profiling one
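The refcount-like begin/end behaviour could look like the sketch below. The function names are assumptions (not the real hypercall interface), and a plain variable again stands in for the real SPR; the point is only that the state captured on the FIRST "begin" call comes back on the LAST "end" call.

```c
#include <assert.h>
#include <stdint.h>

static uint64_t hw_mmcr0;    /* stands in for the real MMCR0 SPR */
static uint64_t saved_mmcr0;
static int profiling_refs;

void profiling_begin(uint64_t new_mmcr0)
{
    if (profiling_refs++ == 0)
        saved_mmcr0 = hw_mmcr0;    /* capture the initial set exactly once */
    hw_mmcr0 = new_mmcr0;
}

void profiling_end(void)
{
    if (--profiling_refs == 0)
        hw_mmcr0 = saved_mmcr0;    /* restore only on the last "end profiling" */
}
```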

IRQ setup in Linux:
->each guest already sets up its pmc_irq handler in head32.S/head64.S as is done now, behind its own address translation, and therefore handles its "own" perf IRQs.

Correct, so we need our own ppc_md.enable_pmcs = xen_enable_pmcs, which should be a straight copy of pseries_lpar_enable_pmcs().

->the pmc_irq does not affect MSR[HV], so the IRQ is handled by the right Linux guest
->Xen sets up no handler for its own address space (first step)

MMCR0[FCM1] and MSR[PMM] usage:
-the Linux implementation uses MMCR0[FCM1]=1 to sample only on PMM=0. No change here in the first step, because we set MMCR0[FCH] anyway and save/restore everything on context switch

Even if this step 1 is not yet about profiling Xen itself, it might help us in xenppc to: a) understand how virtualization changes the runtime behavior of a guest in our case, and b) profile hotspots in new components not known to non-virtualized Linux, e.g. the *front drivers.


Part II
== Thoughts about the way to step 2 - profiling xen ==
->the hypercall could now additionally pass a function pointer for a function in linux
 that handles xen perf interrupts

Hmm, we generally do not have the ability to specify a function that Xen runs in a domain; we could, but I'm not sure it makes sense. I would think that we would use event channels exclusively for this, and the domain activity would be wired to an IRQ.

Is that not how it works in the other architectures?
Please expand on this "function pointer"

->only one domain can set this up. Xen then sets up its own handler for the 0xf00 pmc_irq and passes the sampled data via a shared buffer/virtual IRQ to the domain
 (this part would be similar to xenoprof)
->in this case MMCR0[FCH] is set to zero in Xen space, in exceptions.S, as long as the profiling takes place

Correct. This means that before we set MMCR0[FCH] we will need to save/restore guest/Xen state. This would also mean that the context save/restore code would have to use this memory area for guest switches if we are profiling Xen.

We also need to make sure that this path is similar to the 0x500 path, in that we do not allow MSR[EE] to be set and we simply sample and hrfid. Make sure you add a BUG() that checks that MSR[HV] was set before the interrupt occurred.
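The sanity check suggested here amounts to testing MSR[HV] in the saved SRR1 on the 0xf00 path. A sketch, assuming the standard 64-bit MSR[HV] bit position and a hypothetical helper name (BUG() itself is elided):

```c
#include <assert.h>
#include <stdint.h>

#define MSR_HV (1ULL << 60)   /* MSR[HV] bit in the 64-bit PowerPC MSR */

/* SRR1 holds the MSR of the interrupted context; when profiling Xen,
 * the perf interrupt we handle here must have hit hypervisor mode. */
int pmc_irq_was_in_xen(uint64_t srr1)
{
    return (srr1 & MSR_HV) != 0;
}
```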

->the handling of the sampled Xen data in the "main" domain could be very similar to the xenoprof approach, which also passes Xen samples to the primary sampling domain. In this way we should be able to reuse a lot of code there.
->additionally, our samples carry a clear flag in MMCRA[SAMPHV] telling whether the sample was taken in the hypervisor. This should allow us an early code unification without a lot of "magic"
-if this works we would be able to profile each domain completely independently of the others, because each would have its own saved/restored perf counters. As an example, in the maximum stage of expansion this would enable our solution to profile one domain by cycles and another by L2 misses.
The Xen samples would be managed by a primary domain, e.g. the first one that requests this via the hypercall - later ones get an -EBUSY or something like that
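The claim logic for the primary sampling domain could be as simple as the following sketch. The hypercall names and the plain int domain id are illustrative assumptions; only the first-claimer-wins / -EBUSY behaviour is taken from the description above.

```c
#include <assert.h>
#include <errno.h>

#define DOMID_INVALID (-1)
static int xen_profiler = DOMID_INVALID;   /* domain currently profiling Xen */

int hypercall_claim_xen_profiling(int domid)
{
    /* the first domain that asks becomes the primary sampling domain */
    if (xen_profiler != DOMID_INVALID && xen_profiler != domid)
        return -EBUSY;
    xen_profiler = domid;
    return 0;
}

int hypercall_release_xen_profiling(int domid)
{
    /* only the current owner may release the claim */
    if (xen_profiler != domid)
        return -EINVAL;
    xen_profiler = DOMID_INVALID;
    return 0;
}
```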

== Thoughts about step 2b - profiling passive domains ==
->because the solution for profiling domains is so similar to the plain Linux oprofile approach, it could be possible to enable "normal" performance monitor usage in such a domain, as long as another domain, e.g. an admin on dom0, tells Xen that there will be some profiling and that it has to save/restore the performance SPRs. This is not fully passive, but at least a solution for non-virtualization-aware domains, whatever these might be in xenppc ;)
-to really discuss that step, step 1 has to shape up to its final implementation, so we know it works the way we currently think

--

Grüsse / regards, Christian Ehrhardt

IBM Linux Technology Center, Open Virtualization
+49 7031/16-3385
Ehrhardt@xxxxxxxxxxxxxxxxxxx
Ehrhardt@xxxxxxxxxx

IBM Deutschland Entwicklung GmbH
Chairman of the Supervisory Board: Johann Weihen; Management: Herbert Kircher; Registered office: Böblingen
Registergericht: Amtsgericht Stuttgart, HRB 243294


_______________________________________________
Xen-ppc-devel mailing list
Xen-ppc-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ppc-devel
