[Xen-devel] Very large value from get_nsec_offset() in timer_interrupt

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] Very large value from get_nsec_offset() in timer_interrupt
From: Ryan Harper <ryanh@xxxxxxxxxx>
Date: Thu, 20 Apr 2006 17:47:35 -0500
User-agent: Mutt/1.5.6+20040907i
I've been trying to debug the live-locking of some 64-bit paravirt guests
for the past week, and after finally getting gdb to give me a backtrace,
I've been able to do some investigation.

The guest makes no forward progress from the user's perspective (the
console produces no output), but the kernel is always running and
consuming CPU time according to xm list.

Examining the vcpu context, I was able to glean that it was usually doing
timer processing of some sort, either in timer_interrupt or (more often)
in do_timer.  After looking at some xentrace records, everything seemed
fine from Xen's perspective: I was seeing timer interrupts on the proper
periodic schedule.  That left me with the guest.

I attached to a live-locked guest and got the following backtrace:

(gdb) bt
#0  0xffffffff8014e296 in softlockup_tick (regs=0xffff88002efdfd48) at kernel/softlockup.c:52
#1  0xffffffff80134cb9 in do_timer (regs=0xffff88002efdfd48) at kernel/timer.c:947
#2  0xffffffff8010f325 in timer_interrupt (irq=788397384, dev_id=0x989680, regs=0xffff88002efdfd48)
    at arch/x86_64/kernel/../../i386/kernel/time-xen.c:674
#3  0xffffffff8014e5b9 in handle_IRQ_event (irq=256, regs=0xffff88002efdfd48, action=0xffff880001dfe600)
    at kernel/irq/handle.c:88
#4  0xffffffff8014e6b2 in __do_IRQ (irq=256, regs=0xffff88002efdfd48) at kernel/irq/handle.c:173
#5  0xffffffff8010d6ae in do_IRQ (regs=0xffff88002efdfd48) at arch/x86_64/kernel/irq-xen.c:105
#6  0xffffffff80249c50 in evtchn_do_upcall (regs=0xffff88002efdfd48) at drivers/xen/core/evtchn.c:215
#7  0xffffffff8010b87e in do_hypervisor_callback ()
#8  0xffff88002efdfd48 in ?? ()
#9  0x0000000000000000 in ?? ()

Jumping into frame 2:

(gdb) frame 2
#2  0xffffffff8010f325 in timer_interrupt (irq=788397384, dev_id=0x989680, regs=0xffff88002efdfd48)
    at arch/x86_64/kernel/../../i386/kernel/time-xen.c:674
674                     do_timer(regs);


Which is in this loop (with some of my hackery):
(gdb) list
669             ticks = 0ULL;
670             while (delta >= NS_PER_TICK) {
671                     ticks++;
672                     delta -= NS_PER_TICK;
673                     processed_system_time += NS_PER_TICK;
674                     do_timer(regs);
675             }

Checking out delta:

(gdb) p delta
$5 = 6139533539238523629

From the source code, NS_PER_TICK is 1000000000LL/HZ.
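
To put a number on that, here is a back-of-the-envelope sketch; HZ=250 is
an assumption on my part, and any plausible HZ gives the same conclusion:

/* How long would the while loop above take to drain the observed
 * delta?  HZ is assumed to be 250 here. */
#include <stdio.h>

int main(void)
{
        const long long HZ = 250;                        /* assumed */
        const long long NS_PER_TICK = 1000000000LL / HZ; /* 4,000,000 ns */
        const long long delta = 6139533539238523629LL;   /* observed */

        /* ~1.5e12 iterations -- about 195 years' worth of ticks --
         * each one calling do_timer().  Effectively forever from the
         * user's point of view. */
        printf("iterations: %lld\n", delta / NS_PER_TICK);
        printf("simulated years: %lld\n",
               delta / 1000000000LL / (365LL * 24 * 3600));
        return 0;
}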

With those values, it is no surprise that the guest is always busy.
Adding some more debugging where we set delta, I captured the two values
used to compute it.

(gdb) list timer_interrupt
610     s64 ns_offset;
611     u64 ticks;
612     u64 shadow_system_timestamp;
613
614     irqreturn_t timer_interrupt(int irq, void *dev_id, struct pt_regs *regs)
615     {
616             s64 delta, delta_cpu, stolen, blocked;
617             u64 sched_time;
618             int i, cpu = smp_processor_id();
619             struct shadow_time_info *shadow = &per_cpu(shadow_time, cpu);
(gdb)
620             struct vcpu_runstate_info *runstate = &per_cpu(runstate, cpu);
621
622             write_seqlock(&xtime_lock);
623
624             do {
625                     get_time_values_from_xen();
626
627                     /* Obtain a consistent snapshot of elapsed wallclock cycles. */
628                     shadow_system_timestamp = shadow->system_timestamp;
629                     ns_offset = get_nsec_offset(shadow);
(gdb)
630                     //delta = delta_cpu =
631                     //      shadow->system_timestamp + get_nsec_offset(shadow);
632                     delta = delta_cpu = shadow_system_timestamp + ns_offset;
633                     delta     -= processed_system_time;
634                     delta_cpu -= per_cpu(processed_system_time, cpu);

The values of shadow_system_timestamp and ns_offset are:
(gdb) p shadow_system_timestamp
$6 = 5295129204996
(gdb) p ns_offset
$7 = 6148883847633798812

ns_offset is calculated from get_nsec_offset().  I'm going to dig
a little further, but I wanted to get what I'm seeing out there.
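
For what it's worth, shadow_system_timestamp is ~5295 seconds (~88
minutes), a plausible uptime, while ns_offset is ~6.1e18 ns, roughly 195
years, so the offset is clearly the bad value.  get_nsec_offset() in
time-xen.c looks roughly like this (reconstructed from the 2.6.16 tree;
treat it as a sketch, not gospel):

static u64 get_nsec_offset(struct shadow_time_info *shadow)
{
        u64 now, delta;

        rdtscll(now);
        /* If the TSC read here is behind shadow->tsc_timestamp, this
         * unsigned subtraction wraps to an enormous value. */
        delta = now - shadow->tsc_timestamp;
        return scale_delta(delta, shadow->tsc_to_nsec_mul,
                           shadow->tsc_shift);
}

One hypothesis for a value of the magnitude above: the vcpu read a TSC
that lags the one used for the shadow snapshot (e.g. after being moved
across physical cpus with unsynchronized TSCs), so the subtraction
wrapped and scale_delta() turned it into a huge nanosecond count.  I
haven't confirmed that yet.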


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
(512) 838-9253   T/L: 678-9253
ryanh@xxxxxxxxxx
