
Re: [Xen-devel] LTTng Xen port

* Ian Pratt (m+Ian.Pratt@xxxxxxxxxxxx) wrote:
> > I have read your tracing thread and I am surprised to see how many
> > things you would like in a tracer are already implemented and tested
> > in LTTng. I am currently porting my tracer to Xen, so I think it
> > might be useful for you to know what it provides. My goal is to avoid
> > duplicating the effort and save everyone some time.
> I like the work you've done with LTTng, but we have to be careful not
> to go too overboard with how fancy we make the solution for Xen. We
> don't particularly need dynamic registering of trace types (though
> being able to turn them off and on dynamically is good), and I'd like
> to keep as much complexity as possible at compile time. Having per-CPU
> buffers using the TSC as the timestamp is perfectly adequate (provided
> we drop in an appropriate synchronization record whenever the Xen
> TSC/wall clock calibration code runs on each CPU).

Hi Ian,

The good thing about being flexible is that we can easily trim down
unneeded features like dynamic registration of trace types if you don't
like them (they are implemented in the ltt-facilities.c module, which
could be hacked to take a statically known set of facilities). Some
features are still interesting even without dynamically loadable
facilities, though: a small control channel that records information
about facilities, event types, architecture type sizes and endianness,
for instance, since we can expect developers to extend the set of events
with their own, and it provides portability.
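To make the idea concrete, here is a minimal sketch of the kind of
self-describing record such a control channel could carry. None of the
names below come from LTTng or Xen; they are hypothetical, just to show
how recording a magic number and the architecture type sizes lets a
decoder on a different host parse the trace portably:

#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical control-channel header (not LTTng's actual layout). */
struct trace_arch_header {
    uint32_t magic;           /* lets a reader detect endianness      */
    uint8_t  sizeof_long;     /* architecture type sizes follow       */
    uint8_t  sizeof_ptr;
    uint8_t  sizeof_size_t;
    uint8_t  reserved;
};

#define TRACE_MAGIC 0x54524143u   /* appears byte-swapped to the other
                                     endianness, which is the point    */

static void fill_arch_header(struct trace_arch_header *h)
{
    memset(h, 0, sizeof(*h));
    h->magic         = TRACE_MAGIC;
    h->sizeof_long   = (uint8_t)sizeof(long);
    h->sizeof_ptr    = (uint8_t)sizeof(void *);
    h->sizeof_size_t = (uint8_t)sizeof(size_t);
}

int main(void)
{
    struct trace_arch_header h;
    fill_arch_header(&h);
    /* A decoder compares the magic against both byte orders to learn
     * the producer's endianness, then uses the recorded type sizes to
     * interpret pointer- and long-sized event fields. */
    assert(h.magic == TRACE_MAGIC);
    assert(h.sizeof_ptr == sizeof(void *));
    return 0;
}

Events themselves then only need to reference a facility/event id;
everything needed to decode them travels once, in the control channel.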

> > - Polling for data in Xen from a dom0 process.
> >   Xentrace currently polls the hypervisor every 100ms to see if
> >   there is data that needs to be consumed. Instead of active
> >   polling, it would be nice to use the dom0 OS capability to put a
> >   process to sleep while waiting for a resource. It would imply
> >   creating a module, loaded in dom0, that would wait for a specific
> >   virq coming from the Hypervisor to wake up such processes. We
> >   could think of exporting a complete poll() interface through sysfs
> >   or procfs that would be a directory filled with the resources
> >   exported from the Hypervisor to dom0 (which could include waiting
> >   for a resource to be freed, useful when shutting down a domU
> >   instead of busy looping). It would help dom0 schedule other
> >   processes while a process is waiting for the Hypervisor.
> I really thought we already had the functionality to enable the trace
> writer to block on the trace buffer(s) becoming half full -- I thought
> Rob Gardener fixed this ages ago. He certainly *promised* a patch to
> do it :)

From the current mercurial tree:

int monitor_tbufs(FILE *logfile)
...
    /* now, scan buffers for events */
    while ( !interrupted )
    {
        for ( i = 0; (i < num) && !interrupted; i++ )
        {
            while ( meta[i]->cons != meta[i]->prod )
            {
                rmb(); /* read prod, then read item. */
                write_rec(i, data[i] + meta[i]->cons % size_in_recs, logfile);
                mb(); /* read item, then update cons. */
                meta[i]->cons++;
            }
        }
        nanosleep(&opts.poll_sleep, NULL);
    }

So it seems like the implementation must be hiding either in someone's head or
in his mercurial tree. :)

I guess it would be easier to implement if there were support in the
dom0 OS for blocking a process waiting for a Hypervisor resource. Or
maybe it is already implemented but has eluded my attention?



OpenPGP public key:              http://krystal.dyndns.org:8080/key/compudj.gpg
Key fingerprint:     8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68 

Xen-devel mailing list
