This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] xen/arch/x86/time.c:local_time_calibration

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] xen/arch/x86/time.c:local_time_calibration
From: Mathieu Desnoyers <compudj@xxxxxxxxxxxxxxxxxx>
Date: Fri, 2 Mar 2007 19:58:51 -0500
Cc: ltt-dev@xxxxxxxxxx, mbligh@xxxxxxxxxx
Delivery-date: Fri, 02 Mar 2007 16:58:01 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.13 (2006-08-11)

First of all, please forgive questions that may already have been
answered elsewhere: I have been following the evolution of Xen

I am digging into the mechanism around get_s_time() in Xen to see
how/why it can/can't suit LTTng's tracing needs, and a few questions
arise:

From what I see, I cannot use get_s_time() from an NMI handler, because
it can race with the tsc_scale update in local_time_calibration(). Do
you have any plan to support this?
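For comparison, one common way to make such a read NMI-safe is a seqlock-style version counter around the calibration snapshot: the reader retries whenever an update was in flight. The sketch below is illustrative only; the struct and field names are assumptions, not Xen's actual data structures, and the TSC read is stubbed so the example is self-contained.

```c
#include <stdint.h>

/* Hypothetical snapshot of the calibration state, with a version counter in
 * the style of a seqlock; names are illustrative, not Xen's actual fields. */
struct time_snapshot {
    volatile uint32_t version;  /* incremented before and after each update,
                                   so it is odd while an update is in flight */
    uint64_t stamp_tsc;         /* TSC value at the last calibration point */
    uint64_t stamp_ns;          /* system time (ns) at that point */
    uint32_t mul_frac;          /* TSC->ns multiplier, 0.32 fixed point */
    int      shift;             /* pre-shift applied to the TSC delta */
};

/* Stubs so the sketch is self-contained; a real reader would use rdtsc. */
static struct time_snapshot ts = {
    .version = 2, .stamp_tsc = 1000, .stamp_ns = 500,
    .mul_frac = 1u << 31,  /* 0.5 ns per cycle, i.e. a 2 GHz TSC */
    .shift = 0,
};
static uint64_t fake_tsc;
static uint64_t rdtsc_now(void) { return fake_tsc; }

/* NMI-safe read: retry whenever the version changed under us or an update
 * was in flight, so we never combine an old stamp with a new scale factor. */
static uint64_t get_s_time_nmi_safe(void)
{
    uint32_t v;
    uint64_t delta, ns;

    do {
        v = ts.version;
        __sync_synchronize();                   /* order version vs. fields */
        delta = rdtsc_now() - ts.stamp_tsc;
        if (ts.shift < 0)
            delta >>= -ts.shift;
        else
            delta <<= ts.shift;
        ns = ts.stamp_ns +
             (uint64_t)(((__uint128_t)delta * ts.mul_frac) >> 32);
        __sync_synchronize();
    } while ((v & 1) || v != ts.version);

    return ns;
}
```

The key property for NMI safety is that the reader never blocks: it only loops, so it cannot deadlock against an interrupted writer on the same CPU as long as the writer always completes its two version increments.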

local_time_calibration is called from the timer interrupt, which seems
to have the highest priority, at least on x86 and x86_64. Why, then, do
you disable interrupts explicitly in this function, since you know they
are already disabled?

Do you offer any method for the Linux kernel in dom0 and domUs to read
this timer (an interface similar to vsyscall in Linux)? This could be
very useful for system-wide tracing.

I am a bit concerned about the performance impact of calling
scale_delta() at each timestamp read. Have you measured how many cycles
it takes?
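For reference, a conversion of this kind reduces to a shift plus one widening multiply. The sketch below reconstructs the general shift + 32.32 fixed-point pattern; the struct layout and names are assumed for illustration, not copied from xen/arch/x86/time.c.

```c
#include <stdint.h>

/* Assumed layout: a pre-shift plus a 0.32 fixed-point multiplier giving
 * nanoseconds per TSC cycle (the fraction cannot reach 1.0, hence the
 * shift to bring the ratio into range). */
struct time_scale {
    int      shift;
    uint32_t mul_frac;   /* ns-per-cycle as a 0.32 fixed-point fraction */
};

/* ns = (delta << shift) * mul_frac / 2^32 */
static uint64_t scale_delta(uint64_t delta, const struct time_scale *s)
{
    if (s->shift < 0)
        delta >>= -s->shift;
    else
        delta <<= s->shift;
    return (uint64_t)(((__uint128_t)delta * s->mul_frac) >> 32);
}
```

On a 64-bit x86 this compiles to roughly one shift, one 64x64 multiply, and one shift, i.e. a handful of cycles; one would expect the cost of a timestamp read to be dominated by the rdtsc instruction itself rather than the scaling.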

Your interpolation scheme between the timer interrupt and the TSC uses
tsc_scale to make sure there is no time jump when the master oscillator
goes slower than the local time. However, I see that there is a forward
time jump when the local time lags behind the TSC. Is there any reason
for not using a scale factor to smoothly accelerate the frequency
instead?

Why are you interpolating between the timer interrupt and the TSC? I
guess this is useful to support Intel SpeedStep and AMD PowerNow, but I
want to be sure.

I guess you are aware that you change the TSC's precision by doing so:
it will suffer, in the worst case, a drift on the order of the IRQ
latency of the system, which depends on the longest critical sections
with IRQs disabled.
Since the TSCs on the CPUs populating a physical machine can, in the
worst case, differ by up to the IRQ latency, timestamps taken at exactly
the same moment could differ by this amount. Do you have any latency
measurements for the hypervisor?



Mathieu Desnoyers
Computer Engineering Ph.D. Student, Ecole Polytechnique de Montreal
OpenPGP key fingerprint: 8CD5 52C3 8E3C 4140 715F  BA06 3F25 A8FE 3BAE 9A68

Xen-devel mailing list
