[Xen-devel] Re: [PATCH 3/5] x86/pvclock: add vsyscall implementation

To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH 3/5] x86/pvclock: add vsyscall implementation
From: Avi Kivity <avi@xxxxxxxxxx>
Date: Tue, 06 Oct 2009 17:11:22 +0200
Cc: Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>, kurt.hackel@xxxxxxxxxx, the arch/x86 maintainers <x86@xxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Glauber de Oliveira Costa <gcosta@xxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, zach.brown@xxxxxxxxxx, chris.mason@xxxxxxxxxx
In-reply-to: <711a958d-5a76-4f00-aa69-8e5889945992@default>
References: <711a958d-5a76-4f00-aa69-8e5889945992@default>
On 10/06/2009 04:19 PM, Dan Magenheimer wrote:
>> From: Jeremy Fitzhardinge [mailto:jeremy.fitzhardinge@xxxxxxxxxx]
>> With this in place, I can do a gettimeofday in about 100ns on a 2.4GHz
>> Q6600.  I'm sure this could be tuned a bit more, but it is
>> already much better than a syscall.
> To evaluate the goodness of this, we really need a full
> set of measurements for:
>
> a) cost of rdtsc (and rdtscp if different)
> b) cost of vsyscall+pvclock
> c) cost of rdtsc emulated
> d) cost of a hypercall that returns "hypervisor system time"
>
> On an E6850 (3GHz, but let's use cycles), I measured:
>
> a == 72 cycles
> c == 1080 cycles
> d == 780 cycles
>
> It may be partly apples and oranges, but it looks
> like a good guess for b on my machine is
>
> b == 240 cycles

Two rdtscps should suffice (and I think they are much faster on modern machines).
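A minimal sketch of the two-rdtscp idea, assuming per-cpu pvclock data
mapped where user space can read it. The struct layout follows the
pvclock ABI (struct pvclock_vcpu_time_info); the pvti array, the retry
policy, and the omission of memory barriers are illustrative
simplifications:

#include <stdint.h>

/* Layout follows the pvclock ABI (struct pvclock_vcpu_time_info). */
struct pvclock_vcpu_time_info {
    uint32_t version;            /* odd while the hypervisor is updating */
    uint32_t pad0;
    uint64_t tsc_timestamp;      /* TSC value at the last update */
    uint64_t system_time;        /* ns of system time at that TSC */
    uint32_t tsc_to_system_mul;  /* fixed-point scale factor */
    int8_t   tsc_shift;
    uint8_t  flags;
    uint8_t  pad[2];
};

/* Illustrative: per-cpu records the kernel would map read-only for
 * user space; not an existing symbol. */
extern struct pvclock_vcpu_time_info pvti[];

static inline uint64_t rdtscp(uint32_t *cpu)
{
    uint32_t lo, hi, aux;
    asm volatile("rdtscp" : "=a"(lo), "=d"(hi), "=c"(aux));
    *cpu = aux;                  /* TSC_AUX holds the cpu number */
    return ((uint64_t)hi << 32) | lo;
}

static uint64_t pvclock_read_ns(void)
{
    struct pvclock_vcpu_time_info *p;
    uint32_t cpu, cpu2, ver;
    uint64_t tsc, delta, ns;

    do {
        tsc = rdtscp(&cpu);      /* tsc and cpu number, read atomically */
        p = &pvti[cpu];
        ver = p->version;
        delta = tsc - p->tsc_timestamp;
        if (p->tsc_shift >= 0)
            delta <<= p->tsc_shift;
        else
            delta >>= -p->tsc_shift;
        ns = p->system_time +
             (((__uint128_t)delta * p->tsc_to_system_mul) >> 32);
        (void)rdtscp(&cpu2);     /* second rdtscp: detect migration */
    } while (cpu != cpu2 || (ver & 1) || ver != p->version);

    return ns;
}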

> Not bad, but is there any additional context switch
> cost to support it?

rdtscp requires an additional MSR read/write on heavyweight host context switches. That should be negligible compared to the savings.
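A minimal sketch of what that save/restore looks like, assuming ring-0
code. The MSR helpers and the switch function are illustrative, not
Xen's or KVM's actual context-switch path:

#include <stdint.h>

#define MSR_TSC_AUX 0xc0000103   /* IA32_TSC_AUX, returned in ECX by rdtscp */

static inline uint64_t rdmsr(uint32_t msr)
{
    uint32_t lo, hi;
    asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
    return ((uint64_t)hi << 32) | lo;
}

static inline void wrmsr(uint32_t msr, uint64_t val)
{
    asm volatile("wrmsr" :: "c"(msr), "a"((uint32_t)val),
                 "d"((uint32_t)(val >> 32)));
}

/* On a heavyweight guest-to-guest switch, save the outgoing guest's
 * TSC_AUX and load the incoming guest's; this rdmsr/wrmsr pair is
 * the extra cost under discussion. */
void switch_tsc_aux(uint64_t *prev_tsc_aux, uint64_t next_tsc_aux)
{
    *prev_tsc_aux = rdmsr(MSR_TSC_AUX);
    wrmsr(MSR_TSC_AUX, next_tsc_aux);
}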

>> From: Avi Kivity [mailto:avi@xxxxxxxxxx]
>> Instead of using vgetcpu() and rdtsc() independently, you can
>> use rdtscp to read both atomically.  This removes the need for
>> the preempt notifier.
> Xen does not currently expose rdtscp and so does not emulate
> (or context switch) TSC_AUX.  Context switching TSC_AUX
> is certainly possible, but will likely be expensive.
> If the primary reason for vsyscall+pvclock is to maximize
> performance for gettimeofday/clock_gettime, this cost
> would need to be added to the mix.

It will cost ~100 cycles on a heavyweight host context switch (guest-to-guest).

>> preempt notifiers are per-thread, not global, and will upset
>> the cycle counters.  I'd drop them and use rdtscp instead
>> (and give up if the processor doesn't support it).
> Even if rdtscp is used, in the Intel processor lineup
> only the very latest (Nehalem) supports rdtscp, so
> "give up" doesn't seem like a very good option, at least
> in the near future.

Why not? We still fall back to the guest kernel. By the time guest kernels with rdtscp support are in the field, these machines will be quite old.
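A minimal sketch of that fallback, assuming user-space vsyscall-style
code. CPUID leaf 0x80000001, EDX bit 27 is the architectural rdtscp
feature bit; the function names around it are illustrative:

#include <stdbool.h>
#include <stdint.h>

/* rdtscp support is advertised in CPUID leaf 0x80000001, EDX bit 27. */
static bool cpu_has_rdtscp(void)
{
    uint32_t eax, ebx, ecx, edx;
    asm volatile("cpuid"
                 : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                 : "a"(0x80000001u));
    return edx & (1u << 27);
}

uint64_t pvclock_read_ns(void);     /* fast path sketched above */
uint64_t kernel_gettime_ns(void);   /* illustrative: ordinary syscall path */

/* "Giving up" just means taking the normal syscall; only the fast
 * path is lost on CPUs without rdtscp. */
uint64_t gettime_ns(void)
{
    static int has_rdtscp = -1;
    if (has_rdtscp < 0)
        has_rdtscp = cpu_has_rdtscp();
    return has_rdtscp ? pvclock_read_ns() : kernel_gettime_ns();
}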

--
error compiling committee.c: too many arguments to function


