WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
xen-devel

[Xen-devel] Re: [PATCH] CPUIDLE: revise tsc-save/restore to avoid big tsc skew between cpus

To: "Wei, Gang" <gang.wei@xxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] CPUIDLE: revise tsc-save/restore to avoid big tsc skew between cpus
From: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Date: Fri, 05 Dec 2008 11:47:42 +0000
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 05 Dec 2008 03:48:11 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <8FED46E8A9CA574792FC7AACAC38FE7701C589A7C8@xxxxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AclWoeM2xc+X6mj6QOaQHsyxDmpzagAFRoNpAACb7OAABH8KUAAA965l
Thread-topic: [PATCH] CPUIDLE: revise tsc-save/restore to avoid big tsc skew between cpus
User-agent: Microsoft-Entourage/12.14.0.081024
On 05/12/2008 11:30, "Wei, Gang" <gang.wei@xxxxxxxxx> wrote:

>> I tried extrapolating from t->stime_local_stamp, cpu_khz, and
>> t->local_tsc_stamp before I arrived at the current solution. It would still
>> bring accumulating skew, but at a slower rate. I would like to
>> try it again with t->tsc_scale instead of cpu_khz. If it works, it would
>> really be simpler. Allow me some time.
> 
> The patch below should be what you expected. It will still bring continuously
> increasing tsc skew. If I pin all domains' vcpus on one pcpu, the skew
> increases faster.
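
For context, a minimal standalone sketch of the extrapolation described above:
instead of saving and restoring the raw TSC around a deep C-state, recompute
the value the TSC should hold at the current system time from the last
calibration stamps (t->local_tsc_stamp, t->stime_local_stamp) and the per-CPU
t->tsc_scale rather than cpu_khz. The struct layout and helper names below are
illustrative assumptions, not the actual Xen code or the patch under
discussion.

/* Illustrative sketch only (64-bit gcc/clang), not the actual patch. */
#include <stdint.h>
#include <stdio.h>

struct time_scale {              /* ns = ((tsc << shift) * mul_frac) >> 32 */
    int shift;
    uint32_t mul_frac;
};

struct cpu_time {                /* assumed layout, mirroring the fields named above */
    uint64_t local_tsc_stamp;    /* TSC at last calibration */
    int64_t  stime_local_stamp;  /* system time (ns) at last calibration */
    struct time_scale tsc_scale; /* per-CPU TSC -> ns scale */
};

/* TSC delta -> nanoseconds, using the per-CPU scale. */
static uint64_t scale_delta(uint64_t delta, const struct time_scale *s)
{
    if (s->shift < 0)
        delta >>= -s->shift;
    else
        delta <<= s->shift;
    return (uint64_t)(((__uint128_t)delta * s->mul_frac) >> 32);
}

/* Inverse direction: what should the TSC read at system time `now` (ns)? */
static uint64_t extrapolate_tsc(const struct cpu_time *t, int64_t now)
{
    uint64_t ns  = (uint64_t)(now - t->stime_local_stamp);
    uint64_t tsc = (uint64_t)(((__uint128_t)ns << 32) / t->tsc_scale.mul_frac);
    if (t->tsc_scale.shift < 0)
        tsc <<= -t->tsc_scale.shift;
    else
        tsc >>= t->tsc_scale.shift;
    return t->local_tsc_stamp + tsc;
}

int main(void)
{
    struct cpu_time t = {
        .local_tsc_stamp   = 1000000000ULL,
        .stime_local_stamp = 5000000000LL,
        .tsc_scale         = { .shift = 0, .mul_frac = 1789569706u }, /* ~2.4GHz */
    };
    int64_t now = t.stime_local_stamp + 1000000;   /* 1ms after calibration */
    uint64_t tsc = extrapolate_tsc(&t, now);

    printf("extrapolated TSC %llu, converted back: %llu ns\n",
           (unsigned long long)tsc,
           (unsigned long long)scale_delta(tsc - t.local_tsc_stamp,
                                           &t.tsc_scale));
    return 0;
}

The point of preferring t->tsc_scale is that it is the same per-CPU scale the
time calibration already uses, whereas cpu_khz is a single global value, so
the ns<->TSC conversion stays consistent with calibration.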

It looks about right to me, and better than using cpu_khz as in the current
code and the original patch. How much skew does it introduce? Is the skew a
problem? Bear in mind that, from the point of view of HVM guests, TSC rates
will be all over the place anyway if we are using P-states (and the host TSC
is not invariant, which is actually the only case in which your code is
enabled anyway).
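
As a side note on the invariant-TSC condition: whether the host TSC keeps a
constant rate across P- and C-states is advertised by CPUID leaf 0x80000007,
EDX bit 8. A minimal user-space sketch of that check (purely illustrative, not
the Xen code path that gates this):

#include <cpuid.h>
#include <stdio.h>

int main(void)
{
    unsigned int eax, ebx, ecx, edx;

    /* Extended leaf 0x80000007: advanced power management info. */
    if (!__get_cpuid(0x80000007, &eax, &ebx, &ecx, &edx)) {
        puts("CPUID leaf 0x80000007 not available");
        return 1;
    }
    /* EDX bit 8: invariant TSC (constant rate across P-/C-states). */
    puts((edx & (1u << 8)) ? "TSC is invariant" : "TSC is NOT invariant");
    return 0;
}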

 -- Keir



