This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] million cycle interrupt

To: dan.magenheimer@xxxxxxxxxx, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "Tian, Kevin" <kevin.tian@xxxxxxxxx>, "Xen-Devel (E-mail)" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] million cycle interrupt
From: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Date: Tue, 14 Apr 2009 17:10:55 +0000 (GMT)
Delivery-date: Tue, 14 Apr 2009 10:11:57 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <a4e61dfc-f523-4f3b-b24c-2fcb8ba0b3ac@default>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> I'll take a look at that next.

It appears that the call to smp_timer_broadcast_ipi()
in timer_interrupt() is the cycle hog.  And it
definitely appears to be a scalability problem!

maxcpus=4: avg=1600, max=15000 (cycles, rounded)
maxcpus=5: avg=2000, max=24000
maxcpus=6: avg=83000, max=244000
maxcpus=7: avg=198000, max=780000
maxcpus=8: avg=310000, max=1027000

The load is a 4vcpu PV EL5u2 32-bit domain continually
compiling linux-2.6.28 with -j80.  I killed the load
after only a few minutes, so the max might get worse.
On the other hand, just booting dom0 seems to put
max in about the same range.


> -----Original Message-----
> From: Dan Magenheimer 
> Sent: Tuesday, April 14, 2009 9:39 AM
> To: Keir Fraser; Tian, Kevin; Xen-Devel (E-mail)
> Subject: RE: [Xen-devel] million cycle interrupt
> > You could validate that quite easily by adding your own
> > timer read wrapped in TSC reads. Actually a PIT read, even
> > though it requires multiple accesses over the ISA bus,
> > should take less than 10us.
> I'll take a look at that next.  Taking average as
> well as max, the timer_interrupt handler is
> AVERAGING over 300K cycles (with 8 processors).
> This interrupt is 100Hz, correct?  If so, that means
> a full 1% of one processor's cycles are spent
> processing timer_interrupts!
> The MSI interrupts have large max, but the average
> is relatively small (under 5000 cycles). It still
> would be nice to know what is causing the max
> value, especially since it only occurs for me with
> more than 4 processors.
> But first, Keir, would you take this patch so that this
> kind of issue can be more easily diagnosed in the future?
> If you're worried about the extra two global variable
> reads, you could wrap an "#ifndef NDEBUG" around it
> (with #else #define opt_measure_irqs 0), but it might
> be nice to have this diagnostic capability in released
> code.

Xen-devel mailing list