Re: [Xen-devel] Re: [PATCH 3/5] x86/pvclock: add vsyscall implementation

To: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Subject: Re: [Xen-devel] Re: [PATCH 3/5] x86/pvclock: add vsyscall implementation
From: Avi Kivity <avi@xxxxxxxxxx>
Date: Mon, 12 Oct 2009 20:29:57 +0200
Cc: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, Jeremy Fitzhardinge <jeremy.fitzhardinge@xxxxxxxxxx>, kurt.hackel@xxxxxxxxxx, the arch/x86 maintainers <x86@xxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Glauber de Oliveira Costa <gcosta@xxxxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, Zach Brown <zach.brown@xxxxxxxxxx>, Chris Mason <chris.mason@xxxxxxxxxx>
Delivery-date: Mon, 12 Oct 2009 11:31:26 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4AD3738B.6050200@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1254790211-15416-1-git-send-email-jeremy.fitzhardinge@xxxxxxxxxx> <1254790211-15416-4-git-send-email-jeremy.fitzhardinge@xxxxxxxxxx> <4ACB0833.2050203@xxxxxxxxxx> <4ACB9074.1000804@xxxxxxxx> <4ACC6C9C.7080707@xxxxxxxxxx> <4ACFD43E.6000506@xxxxxxxx> <4AD0CDFB.9030704@xxxxxxxxxx> <4AD3738B.6050200@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1.1) Gecko/20090814 Fedora/3.0-2.6.b3.fc11 Thunderbird/3.0b3

On 10/12/2009 08:20 PM, Jeremy Fitzhardinge wrote:
> On 10/10/09 11:10, Avi Kivity wrote:
>> On 10/10/2009 02:24 AM, Jeremy Fitzhardinge wrote:
>>> On 10/07/09 03:25, Avi Kivity wrote:

>>>> def try_pvclock_vtime():
>>>>     tsc, p0 = rdtscp()
>>>>     v0 = pvclock[p0].version
>>>>     tsc, p = rdtscp()
>>>>     t = pvclock_time(pvclock[p], tsc)
>>>>     if p != p0 or pvclock[p].version != v0:
>>>>         raise Exception("Processor or timebase changed under our feet")
>>>>     return t
>>> There's a second problem:  If the time_info gets updated between the
>>> first rdtscp and the first version fetch, then we won't have a
>>> consistent tsc/time_info pair.  You could check if tsc_timestamp is >
>>> tsc, but that won't necessarily work on save/restore/migrate.

>> Good catch.  Doesn't that invalidate rdtscp-based vgettimeofday on non-virt as well (assuming p == cpu)?
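
For illustration, the usual pvclock-style retry loop avoids this window entirely: sample the version first (retrying while an update is in progress), read the tsc and the time fields inside that version-protected window, and re-check the version afterwards, so an update anywhere in the sequence forces a retry. A minimal runnable sketch in the same pseudocode spirit; the dict standing in for time_info, the rdtsc() based on time.monotonic_ns(), and the unscaled delta are stand-ins for illustration, not the real pvclock layout or arithmetic:

import time

# Stand-ins: this dict plays the per-cpu time_info and monotonic_ns() plays the TSC.
time_info = {"version": 2,        # even = stable, odd = update in progress
             "tsc_timestamp": 0,  # TSC value captured at the last update
             "system_time": 0}    # guest time (ns) at tsc_timestamp

def rdtsc():
    return time.monotonic_ns()    # stand-in for the real TSC read

def pvclock_read():
    while True:
        v0 = time_info["version"]
        if v0 & 1:                            # writer mid-update: retry
            continue
        tsc = rdtsc()                         # tsc sampled inside the version-protected window
        t = time_info["system_time"] + (tsc - time_info["tsc_timestamp"])  # stand-in scaling
        if time_info["version"] == v0:        # no update overlapped any of our reads
            return t
        # version moved: some field may be torn, go around again

print(pvclock_read())

The point is only the ordering: the tsc read can never be paired with time_info fields from a different update without the version re-check noticing.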

> I suppose that works if you assume that:
>
>     1. every task->vcpu migration is associated with a hv/guest context
>        switch, and
>     2. every hv/guest context switch is a write barrier
>
> I guess 2 is a given, but I can at least imagine cases where 1 might not
> be true.  Maybe.  It all seems very subtle.

What is 1 exactly? A task switching to another vcpu? That doesn't incur hypervisor involvement. A vcpu moving to another cpu? That does.

> And I don't really see a gain.  You avoid maintaining a second version
> number, but at the cost of two rdtscps.  In my measurements, the whole
> vsyscall takes around 100ns to run, and a single rdtsc takes about 30,
> so 30% of total.  Unlike rdtsc, rdtscp is documented as being ordered in
> the instruction stream, and so will take at least as long; two of them
> will completely blow the vsyscall execution time.

I agree, let's stick with the rdtscp-less implementation.
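
To make "rdtscp-less" concrete: the idea is a plain rdtsc inside the version window plus the second counter mentioned above, bumped whenever a task is moved off a vcpu, instead of a trailing rdtscp to re-confirm the cpu. A sketch along those lines, where pvclock, migrate_count, getcpu() and the monotonic-clock stand-in for rdtsc are illustrative names and stand-ins rather than the actual patch:

import time

NCPUS = 2
# Stand-ins: one dict per vcpu plays the vsyscall-visible time_info; names are illustrative.
pvclock = [{"version": 2,        # bumped around every time update (odd while in progress)
            "migrate_count": 0,  # bumped whenever a task is moved off this vcpu
            "tsc_timestamp": 0,
            "system_time": 0} for _ in range(NCPUS)]

def rdtsc():
    return time.monotonic_ns()   # stand-in for a plain, unordered rdtsc

def getcpu():
    return 0                     # stand-in for the userspace cpu-number lookup

def vread_pvclock():
    while True:
        ti = pvclock[getcpu()]
        m0 = ti["migrate_count"]
        v0 = ti["version"]
        if v0 & 1:               # update in progress: retry
            continue
        tsc = rdtsc()
        t = ti["system_time"] + (tsc - ti["tsc_timestamp"])   # stand-in scaling
        # Retry if the time data changed or the task may have been migrated mid-read;
        # no second rdtscp is needed to re-confirm which cpu was used.
        if ti["version"] == v0 and ti["migrate_count"] == m0:
            return t

print(vread_pvclock())

If either counter changed, some field may have been read mid-update or on the wrong cpu, so the loop simply retries; the extra check costs a couple of loads rather than a second ordered rdtscp.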

--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
