To: "dan.magenheimer@xxxxxxxxxx" <dan.magenheimer@xxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] Re: Fix for get_s_time()
From: Dave Winchell <dwinchell@xxxxxxxxxxxxxxx>
Date: Mon, 28 Apr 2008 14:40:37 -0400
Cc: "Tian, Kevin" <kevin.tian@xxxxxxxxx>, Dave Winchell <dwinchell@xxxxxxxxxxxxxxx>, Ian Pratt <Ian.Pratt@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Dan, Keir:

Here is where I stand on the overhead of the HPET read_64_main_counter():
the version layered on get_s_time() with the max function, compared to a
version that goes to the hardware on every call.
There are two histograms, each with 100 buckets; each bucket is 64 cycles wide.
There are 1991 cycles per usec on this box. Bucket 99 contains all
events where the overhead is >= (99*64) cycles.
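
For reference, the instrumentation behind these numbers is just a
fixed-width histogram over a cycle counter. Below is a minimal sketch of
the idea; the names hist[] and measure_overhead(), and the use of RDTSC,
are my own illustration rather than the actual patch.

#include <stdint.h>

#define NR_BUCKETS        100   /* 100 buckets per histogram */
#define CYCLES_PER_BUCKET  64   /* each bucket is 64 cycles wide */

extern uint64_t read_64_main_counter(void);  /* function under test */

static uint64_t hist[NR_BUCKETS];

/* Read the TSC; on this box 1991 cycles == 1 usec. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    asm volatile ("rdtsc" : "=a" (lo), "=d" (hi));
    return ((uint64_t)hi << 32) | lo;
}

/* Time one call to read_64_main_counter() and bucket its cost. */
static void measure_overhead(void)
{
    uint64_t before = rdtsc();
    (void)read_64_main_counter();
    uint64_t cycles = rdtsc() - before;
    unsigned int bucket = cycles / CYCLES_PER_BUCKET;

    /* Bucket 99 absorbs every event >= 99*64 cycles. */
    if (bucket >= NR_BUCKETS)
        bucket = NR_BUCKETS - 1;
    hist[bucket]++;
}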

Layered on stime, the overhead is probably lower on average.
Both histograms are bimodal, but the one going to the hardware
seems to have a stronger second mode. As we have discussed, the cost
of going to the hardware could vary quite a bit from platform to platform.

I have optimized the code around read_64_main_counter() over stime quite a bit,
but I'm sure there is still room for improvement.
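
For context, the stime-layered version is roughly shaped like the sketch
below: derive a main-counter value from get_s_time() and clamp it against
the last value returned, so it can never step backwards. The HPET
frequency, the 128-bit intermediate, and the variable names are my
illustration of the technique, not the actual code; the real version also
has to make the update safe across CPUs.

#include <stdint.h>

#define HPET_HZ     14318180ULL   /* example: 14.318 MHz; real code
                                     would use the calibrated period */
#define NS_PER_SEC  1000000000ULL

extern int64_t get_s_time(void);  /* Xen system time, in ns */

static uint64_t last_count;       /* last value handed out */

uint64_t read_64_main_counter(void)
{
    /* Scale stime (ns) to main-counter ticks; the 128-bit
     * intermediate (a gcc extension) avoids overflow. */
    uint64_t count =
        (uint64_t)(((__uint128_t)get_s_time() * HPET_HZ) / NS_PER_SEC);

    /* The "max function": never return a value smaller than the
     * previous one, so the counter stays monotonic even if stime
     * briefly lags. A real version would need this compare-and-update
     * to be atomic. */
    if (count < last_count)
        count = last_count;
    last_count = count;
    return count;
}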

-Dave

read_64_main_counter() on stime:

(VMM)  cycles per bucket 64
(VMM)
(VMM)  0: 0 78795 148271 21173 15902 47704 89195 121962
(VMM)  8: 83632 51848 17531 12987 10976 8816 9120 8608
(VMM)  16: 5685 3972 3783 2518 1052 710 608 469
(VMM)  24: 277 159 83 46 34 23 19 16
(VMM)  32: 9 6 7 3 4 8 5 6
(VMM)  40: 9 7 14 13 17 25 22 29
(VMM)  48: 25 19 35 27 30 26 21 23
(VMM)  56: 17 24 12 27 22 18 10 22
(VMM)  64: 19 16 16 16 28 18 23 16
(VMM)  72: 22 22 12 14 21 19 17 19
(VMM)  80: 18 14 10 14 11 12 8 18
(VMM)  88: 16 10 17 14 10 8 11 11
(VMM)  96: 10 10 0 175

read_64_main_counter() going to the hardware:

(VMM)  cycles per bucket 64
(VMM)
(VMM)  0: 92529 148423 27850 12532 28042 43336 60516 59011
(VMM)  8: 36895 14043 8162 6857 7794 7401 5099 2986
(VMM)  16: 1636 1066 796 592 459 409 314 248
(VMM)  24: 206 195 138 97 71 45 35 34
(VMM)  32: 33 36 40 40 25 26 25 26
(VMM)  40: 37 23 18 30 27 30 34 44
(VMM)  48: 38 19 25 23 23 25 21 27
(VMM)  56: 28 24 43 80 220 324 568 599
(VMM)  64: 610 565 611 699 690 846 874 788
(VMM)  72: 703 542 556 613 605 603 559 500
(VMM)  80: 485 493 512 578 561 594 575 614
(VMM)  88: 759 851 895 856 807 770 719 958
(VMM)  96: 1127 1263 0 18219


