[Xen-devel] [PATCH 0/6] x86: cpuidle overheads reduction

To: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] [PATCH 0/6] x86: cpuidle overheads reduction
From: "Wei, Gang" <gang.wei@xxxxxxxxx>
Date: Thu, 17 Jun 2010 15:37:10 +0800
Accept-language: zh-CN, en-US
Cc: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "Wei, Gang" <gang.wei@xxxxxxxxx>
Delivery-date: Thu, 17 Jun 2010 00:38:51 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcsN7+V4ETB0/9aUR2moq0AOCz4cFA==
Thread-topic: [PATCH 0/6] x86: cpuidle overheads reduction

Experiments show that, on systems with more than 64 logical CPUs and without an 
always-running APIC timer, if the interrupt rate rises to several thousand Hz 
per CPU, the deep C-state entry/exit overhead climbs from a few percent to over 
50%. This is mainly caused by the deep C-state wakeup logic - a single HPET 
channel has to be used to wake up a large number of CPUs.
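
To illustrate the pattern (a standalone sketch only, not Xen code - all names 
are made up, and the toy mask below only covers 64 CPUs): every CPU entering a 
deep C-state has to take the one shared channel lock to merge its wakeup 
deadline, and take it again on exit, so the lock is hit thousands of times per 
second per CPU.

#include <pthread.h>
#include <stdint.h>

struct hpet_channel {
    pthread_spinlock_t lock;       /* single lock shared by all CPUs        */
    uint64_t next_event;           /* earliest pending deadline (init ~0ULL) */
    uint64_t cpumask;              /* CPUs currently waiting on this channel */
};

/* Deep C-state entry: every sleeping CPU contends on the same lock. */
void channel_attach(struct hpet_channel *ch, int cpu, uint64_t deadline)
{
    pthread_spin_lock(&ch->lock);
    ch->cpumask |= 1ULL << cpu;
    if (deadline < ch->next_event)
        ch->next_event = deadline; /* reprogram the HPET comparator here */
    pthread_spin_unlock(&ch->lock);
}

/* Deep C-state exit: the same lock is taken again just to detach. */
void channel_detach(struct hpet_channel *ch, int cpu)
{
    pthread_spin_lock(&ch->lock);
    ch->cpumask &= ~(1ULL << cpu);
    pthread_spin_unlock(&ch->lock);
}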

We previously tried to shorten the HPET channel spinlock holding time to reduce 
the contention cost around the HPET channel, but that is still not enough for 
the 64-logical-CPU case.

This patchset fixes two obvious small bugs in the cpuidle code, uses stime to 
count C-state residency in the NONSTOP_TSC case, removes the HPET access in 
hpet_broadcast_exit, and redirects some HPET lock users to a new rwlock (a 
rough sketch of that locking idea follows the patch list below).
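
The residency-accounting idea, roughly (again only a hedged stand-alone sketch, 
not the actual patch; get_stime_ns()/read_platform_timer_ns() are stand-ins, 
not real Xen interfaces): when the TSC keeps ticking across deep C-states 
(NONSTOP_TSC), the cheap TSC-derived system time can measure how long the CPU 
slept, so the slow HPET/ACPI PM timer read on every exit can be skipped.

#include <stdint.h>
#include <stdbool.h>
#include <time.h>

/* Stand-in for TSC-derived system time ("stime"): cheap to read. */
static uint64_t get_stime_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);   /* placeholder for rdtsc + scaling */
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* Stand-in for a slow platform timer read (HPET / ACPI PM timer). */
static uint64_t read_platform_timer_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);   /* the real MMIO/port read is slow */
    return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

/* If the TSC keeps running in deep C-states, count residency from stime
 * and avoid touching the platform timer on the wakeup path. */
static uint64_t cstate_residency_ns(bool nonstop_tsc, void (*idle)(void))
{
    uint64_t (*read)(void) = nonstop_tsc ? get_stime_ns
                                         : read_platform_timer_ns;
    uint64_t before = read();
    idle();
    return read() - before;
}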

For a specially simulated mass break-event case, this patchset reduces cpuidle 
overhead from >50% to <15%, increasing C3 residency from 30% to >60%.

[PATCH1/6] cpuidle: fix wrapped ticks calculation for pm timer
[PATCH2/6] cpuidle: reduce redundant cost in cstate_restore_tsc for nonstop tsc
[PATCH3/6] cpuidle: use stime to count c-state residency in NONSTOP_TSC case
[PATCH4/6] cpuidle: remove hpet access in hpet_broadcast_exit
[PATCH5/6] cpuidle: redirect some hpet lock users to a new cpumask_lock
[PATCH6/6] cpuidle: redefine cpumask_lock as rwlock_t
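
As for the rwlock mentioned above (patches 5 and 6): the sketch below shows one 
plausible reading of the idea, not the actual Xen code - names and details are 
made up. Per-CPU attach/detach only flips that CPU's own bit with an atomic op, 
so many CPUs can hold the lock as "readers" in parallel; only the path that 
walks and clears the whole mask needs the exclusive "writer" side.

#include <pthread.h>
#include <stdatomic.h>
#include <stdint.h>

struct wakeup_mask {
    pthread_rwlock_t lock;         /* replaces the old global spinlock   */
    _Atomic uint64_t cpus;         /* CPUs waiting for the HPET wakeup   */
};

/* Deep C-state entry/exit: shared (read) lock plus an atomic bit update,
 * so sleeping/waking CPUs no longer serialize against each other. */
void mask_set_cpu(struct wakeup_mask *m, int cpu)
{
    pthread_rwlock_rdlock(&m->lock);
    atomic_fetch_or(&m->cpus, 1ULL << cpu);
    pthread_rwlock_unlock(&m->lock);
}

void mask_clear_cpu(struct wakeup_mask *m, int cpu)
{
    pthread_rwlock_rdlock(&m->lock);
    atomic_fetch_and(&m->cpus, ~(1ULL << cpu));
    pthread_rwlock_unlock(&m->lock);
}

/* Wakeup path: exclusive (write) lock to snapshot and clear the mask. */
uint64_t mask_take_all(struct wakeup_mask *m)
{
    pthread_rwlock_wrlock(&m->lock);
    uint64_t woken = atomic_exchange(&m->cpus, 0);
    pthread_rwlock_unlock(&m->lock);
    return woken;                  /* caller sends wakeup IPIs to these  */
}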

Jimmy
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
