WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] RE: rdtsc: correctness vs performance on Xen (and KVM?)

To: "Keir Fraser" <keir.fraser@xxxxxxxxxxxxx>, "Jeremy Fitzhardinge" <jeremy@xxxxxxxx>
Subject: Re: [Xen-devel] RE: rdtsc: correctness vs performance on Xen (and KVM?)
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Thu, 03 Sep 2009 09:23:19 +0100
Cc: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>, "Xen-Devel \(E-mail\)" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Alan Cox <alan@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Thu, 03 Sep 2009 01:23:55 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4A9EEC3D.4070402@xxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C6C4A72F.13BDD%keir.fraser@xxxxxxxxxxxxx> <4A9EEC3D.4070402@xxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> Jeremy Fitzhardinge <jeremy@xxxxxxxx> 03.09.09 00:05 >>>
>   1. Add a hypercall to set the desired location of the clock
>      correction info rather than putting it in the shared-info area
>      (akin to vcpu placement).  KVM already has this; they write the
>      address to a magic MSR.

But this is already subject to placement, as it is part of the vcpu_info
structure. While you of course don't want to make the whole vcpu_info
visible to guests, it would seem awkward to segregate the shared_info
pieces even further. I'd rather consider adding a second (optional) copy
of it: updating it adds rather little overhead in Xen, though using that
copy in the kernel's time handling code would eliminate the possibility
of accessing all the vcpu_info fields via percpu_read().

>   2. Pack all the clock structures into a single page, indexed by vcpu
>      number

That adds a scalability issue, albeit a relatively light one: you should
no longer assume there is a limit on the number of vCPUs.

>   3. Map that RO into userspace via fixmap, like the vsyscall page itself
>   4. Use the lsl trick to get the current vcpu to index into the array,
>      then compute a time value using tsc with corrections; iterate if
>      version stamp changes under our feet.
>   5. On context switch, the kernel would increment the version of the
>      *old* vcpu clock structure, so that when the usermode code
>      re-checks the version at the end of its time calculation, it can
>      tell that it has a stale vcpu and it needs to iterate with a new
>      vcpu+clock structure

I don't think you can re-use the hypervisor-updated version field here,
unless you add a protocol by which the two updaters avoid colliding.
struct vcpu_time_info has a padding field, which might be designated
as a guest-kernel version.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
