[Xen-bugs] [Bug 690] New: Dom0: BUG: soft lockup detected on CPU#0!
http://bugzilla.xensource.com/bugzilla/show_bug.cgi?id=690
Summary: Dom0: BUG: soft lockup detected on CPU#0!
Product: Xen
Version: 3.0.2
Platform: x86-64
OS/Version: Linux
Status: NEW
Severity: normal
Priority: P2
Component: Unspecified
AssignedTo: xen-bugs@xxxxxxxxxxxxxxxxxxx
ReportedBy: tethys@xxxxxxxxx
I get these appearing in my /var/log/messages:
<3>BUG: soft lockup detected on CPU#0!
Jun 23 15:42:53 shelbyville kernel: CPU 0:
Jun 23 15:42:53 shelbyville kernel: Modules linked in: bridge nfsd exportfs
lockd sunrpc ipv6 xt_state ip_conntrack nfnetlink xt_tcpudp ipt_LOG ipt_REJECT
xt_physdev iptable_filter ip_tables x_tables video thermal processor
fan container button battery ac ohci_hcd ehci_hcd i2c_nforce2 i2c_core tg3
e100 mii floppy dm_snapshot dm_zero dm_mirror ext3 jbd dm_mod sata_nv 3w_9xxx
Jun 23 15:42:53 shelbyville kernel: Pid: 0, comm: swapper Not tainted
2.6.16-xen #1
Jun 23 15:42:53 shelbyville kernel: RIP: e030:[<ffffffff8010722a>]
<ffffffff8010722a>{hypercall_page+554}
Jun 23 15:42:53 shelbyville kernel: RSP: e02b:ffffffff80436cc8 EFLAGS:
00000246
Jun 23 15:42:53 shelbyville kernel: RAX: 0000000000030000 RBX: 00000000000920e5
RCX: ffffffff8010722a
Jun 23 15:42:53 shelbyville kernel: RDX: ffffffff80446ac1 RSI: 0000000000000000
RDI: 0000000000000000
Jun 23 15:42:53 shelbyville kernel: RBP: 00000000000920e5 R08: 00000000fffffffe
R09: 0000000000000004
Jun 23 15:42:53 shelbyville kernel: R10: 00000000ffffffff R11: 0000000000000246
R12: 0000000000000000
Jun 23 15:42:53 shelbyville kernel: R13: ffffffffffff8000 R14: 0000000000000004
R15: 0000000000000000
Jun 23 15:42:53 shelbyville kernel: FS: 00002b2f3c87f520(0000)
GS:ffffffff8049d000(0000) knlGS:0000000000000000
Jun 23 15:42:53 shelbyville kernel: CS: e033 DS: 0000 ES: 0000
Jun 23 15:42:53 shelbyville kernel:
Jun 23 15:42:53 shelbyville kernel: Call Trace: <IRQ>
<ffffffff8012efac>{__call_console_drivers+76}
Jun 23 15:42:53 shelbyville kernel:
<ffffffff80250eda>{force_evtchn_callback+10}
<ffffffff8012f2ea>{release_console_sem+378}
Jun 23 15:42:53 shelbyville kernel: <ffffffff8012f621>{vprintk+689}
<ffffffff8010ba0a>{do_hypervisor_callback+30}
Jun 23 15:42:53 shelbyville kernel: <ffffffff8012f6dd>{printk+141}
<ffffffff8010722a>{hypercall_page+554}
Jun 23 15:42:53 shelbyville kernel:
<ffffffff80250eda>{force_evtchn_callback+10}
<ffffffff8012f327>{release_console_sem+439}
Jun 23 15:42:53 shelbyville kernel: <ffffffff8012f621>{vprintk+689}
<ffffffff880db3da>{:ipt_LOG:dump_packet+986}
Jun 23 15:42:53 shelbyville kernel: <ffffffff8012f6dd>{printk+141}
<ffffffff801388d9>{lock_timer_base+41}
Jun 23 15:42:53 shelbyville kernel:
<ffffffff880d80bd>{:ipt_REJECT:reject+173}
<ffffffff80112ddb>{dma_map_single+123}
Jun 23 15:42:53 shelbyville kernel:
<ffffffff8806c59a>{:e100:e100_exec_cmd+170}
<ffffffff880dbadd>{:ipt_LOG:ipt_log_packet+381}
Jun 23 15:42:53 shelbyville kernel:
<ffffffff880dbb8b>{:ipt_LOG:ipt_log_target+107}
<ffffffff880cf334>{:ip_tables:ipt_do_table+772}
Jun 23 15:42:53 shelbyville kernel: <ffffffff802fdc9a>{nf_iterate+90}
<ffffffff881b5050>{:bridge:br_nf_forward_finish+0}
Jun 23 15:42:53 shelbyville kernel: <ffffffff802fdd5d>{nf_hook_slow+125}
<ffffffff881b5050>{:bridge:br_nf_forward_finish+0}
Jun 23 15:42:53 shelbyville kernel:
<ffffffff881b52ca>{:bridge:br_nf_forward_ip+346}
<ffffffff802fdc9a>{nf_iterate+90}
Jun 23 15:42:53 shelbyville kernel:
<ffffffff881afff0>{:bridge:br_forward_finish+0}
<ffffffff802fdd5d>{nf_hook_slow+125}
Jun 23 15:42:53 shelbyville kernel:
<ffffffff881afff0>{:bridge:br_forward_finish+0}
<ffffffff881b011c>{:bridge:__br_forward+92}
Jun 23 15:42:53 shelbyville kernel:
<ffffffff881b0e9b>{:bridge:br_handle_frame_finish+235}
Jun 23 15:42:53 shelbyville kernel:
<ffffffff881b4711>{:bridge:br_nf_pre_routing_finish+913}
Jun 23 15:42:53 shelbyville kernel: <ffffffff802fdc9a>{nf_iterate+90}
<ffffffff881b4380>{:bridge:br_nf_pre_routing_finish+0}
Jun 23 15:42:53 shelbyville kernel: <ffffffff802fdd5d>{nf_hook_slow+125}
<ffffffff881b4380>{:bridge:br_nf_pre_routing_finish+0}
Jun 23 15:42:53 shelbyville kernel:
<ffffffff881b4fbd>{:bridge:br_nf_pre_routing+2109}
<ffffffff802fdc9a>{nf_iterate+90}
Jun 23 15:42:54 shelbyville kernel:
<ffffffff881b0db0>{:bridge:br_handle_frame_finish+0}
Jun 23 15:42:54 shelbyville kernel: <ffffffff802fdd5d>{nf_hook_slow+125}
<ffffffff881b0db0>{:bridge:br_handle_frame_finish+0}
Jun 23 15:42:54 shelbyville kernel:
<ffffffff881b1076>{:bridge:br_handle_frame+374}
<ffffffff802e3d52>{netif_receive_skb+578}
Jun 23 15:42:54 shelbyville kernel:
<ffffffff8806e4b0>{:e100:e100_poll+784} <ffffffff802e40ef>{net_rx_action+239}
Jun 23 15:42:54 shelbyville kernel: <ffffffff801345a3>{__do_softirq+131}
<ffffffff8010beda>{call_softirq+30}
Jun 23 15:42:54 shelbyville kernel: <ffffffff8010dc97>{do_softirq+71}
<ffffffff8010dae9>{do_IRQ+73}
Jun 23 15:42:54 shelbyville kernel:
<ffffffff80250fad>{evtchn_do_upcall+205}
<ffffffff8010ba0a>{do_hypervisor_callback+30} <EOI>
Jun 23 15:42:54 shelbyville kernel:
<ffffffff801073aa>{hypercall_page+938} <ffffffff801073aa>{hypercall_page+938}
Jun 23 15:42:54 shelbyville kernel: <ffffffff80108f02>{xen_idle+130}
<ffffffff8010902a>{cpu_idle+234}
Jun 23 15:42:54 shelbyville kernel: <ffffffff804b575a>{start_kernel+506}
<ffffffff804b51b2>{_sinittext+434}
<3>
BUG: soft lockup detected on CPU#0!
Jun 23 17:43:18 shelbyville kernel: CPU 0:
Jun 23 17:43:18 shelbyville kernel: Modules linked in: bridge nfsd exportfs
lockd sunrpc ipv6 xt_state ip_conntrack nfnetlink xt_tcpudp ipt_LOG ipt_REJECT
xt_physdev iptable_filter ip_tables x_tables video thermal processor
fan container button battery ac ohci_hcd ehci_hcd i2c_nforce2 i2c_core tg3
e100 mii floppy dm_snapshot dm_zero dm_mirror ext3 jbd dm_mod sata_nv 3w_9xxx
Jun 23 17:43:18 shelbyville kernel: Pid: 3, comm: ksoftirqd/0 Not tainted
2.6.16-xen #1
Jun 23 17:43:18 shelbyville kernel: RIP: e030:[<ffffffff8010722a>]
<ffffffff8010722a>{hypercall_page+554}
Jun 23 17:43:18 shelbyville kernel: RSP: e02b:ffffffff80436d58 EFLAGS:
00000246
Jun 23 17:43:18 shelbyville kernel: RAX: 0000000000030000 RBX: 000000000011bf4f
RCX: ffffffff8010722a
Jun 23 17:43:18 shelbyville kernel: RDX: ffffffff80448926 RSI: 0000000000000000
RDI: 0000000000000000
Jun 23 17:43:18 shelbyville kernel: RBP: 000000000011bf4f R08: 00000000fffffffe
R09: 0000000000000009
Jun 23 17:43:18 shelbyville kernel: R10: 00000000ffffffff R11: 0000000000000246
R12: 0000000000000000
Jun 23 17:43:18 shelbyville kernel: R13: ffffffffffff8000 R14: 0000000000000009
R15: 0000000000000000
Jun 23 17:43:18 shelbyville kernel: FS: 00002ad193661b00(0000)
GS:ffffffff8049d000(0000) knlGS:0000000000000000
Jun 23 17:43:18 shelbyville kernel: CS: e033 DS: 0000 ES: 0000
Jun 23 17:43:18 shelbyville kernel:
Jun 23 17:43:18 shelbyville kernel: Call Trace: <IRQ>
<ffffffff8012efac>{__call_console_drivers+76}
Jun 23 17:43:18 shelbyville kernel:
<ffffffff80250eda>{force_evtchn_callback+10}
<ffffffff8012f2ea>{release_console_sem+378}
Jun 23 17:43:18 shelbyville kernel: <ffffffff8012f621>{vprintk+689}
<ffffffff8010ba0a>{do_hypervisor_callback+30}
Jun 23 17:43:18 shelbyville kernel: <ffffffff8012f6dd>{printk+141}
<ffffffff8010722a>{hypercall_page+554}
Jun 23 17:43:18 shelbyville kernel:
<ffffffff80250eda>{force_evtchn_callback+10}
<ffffffff8012f327>{release_console_sem+439}
Jun 23 17:43:18 shelbyville kernel: <ffffffff8012f621>{vprintk+689}
<ffffffff880db382>{:ipt_LOG:dump_packet+898}
Jun 23 17:43:18 shelbyville kernel: <ffffffff8012f6dd>{printk+141}
<ffffffff801388d9>{lock_timer_base+41}
Jun 23 17:43:18 shelbyville kernel:
<ffffffff880d80bd>{:ipt_REJECT:reject+173}
<ffffffff80112ddb>{dma_map_single+123}
Jun 23 17:43:18 shelbyville kernel:
<ffffffff8806c59a>{:e100:e100_exec_cmd+170}
<ffffffff880dbadd>{:ipt_LOG:ipt_log_packet+381}
Jun 23 17:43:18 shelbyville kernel:
<ffffffff880dbb8b>{:ipt_LOG:ipt_log_target+107}
<ffffffff880cf334>{:ip_tables:ipt_do_table+772}
Jun 23 17:43:18 shelbyville kernel: <ffffffff802fdc9a>{nf_iterate+90}
<ffffffff881b5050>{:bridge:br_nf_forward_finish+0}
Jun 23 17:43:18 shelbyville kernel: <ffffffff802fdd5d>{nf_hook_slow+125}
<ffffffff881b5050>{:bridge:br_nf_forward_finish+0}
Jun 23 17:43:18 shelbyville kernel:
<ffffffff881b52ca>{:bridge:br_nf_forward_ip+346}
<ffffffff802deeb7>{skb_checksum+87}
Jun 23 17:43:18 shelbyville kernel: <ffffffff802fdc9a>{nf_iterate+90}
<ffffffff881afff0>{:bridge:br_forward_finish+0}
Jun 23 17:43:18 shelbyville kernel: <ffffffff802fdd5d>{nf_hook_slow+125}
<ffffffff881afff0>{:bridge:br_forward_finish+0}
Jun 23 17:43:18 shelbyville kernel:
<ffffffff881b011c>{:bridge:__br_forward+92}
<ffffffff881b0e9b>{:bridge:br_handle_frame_finish+235}
Jun 23 17:43:18 shelbyville kernel:
<ffffffff881b4711>{:bridge:br_nf_pre_routing_finish+913}
Jun 23 17:43:18 shelbyville kernel: <ffffffff802fdc9a>{nf_iterate+90}
<ffffffff881b4380>{:bridge:br_nf_pre_routing_finish+0}
Jun 23 17:43:18 shelbyville kernel: <ffffffff802fdd5d>{nf_hook_slow+125}
<ffffffff881b4380>{:bridge:br_nf_pre_routing_finish+0}
Jun 23 17:43:18 shelbyville kernel:
<ffffffff881b4fbd>{:bridge:br_nf_pre_routing+2109}
<ffffffff802fdc9a>{nf_iterate+90}
Jun 23 17:43:18 shelbyville kernel:
<ffffffff881b0db0>{:bridge:br_handle_frame_finish+0}
Jun 23 17:43:19 shelbyville kernel: <ffffffff802fdd5d>{nf_hook_slow+125}
<ffffffff881b0db0>{:bridge:br_handle_frame_finish+0}
Jun 23 17:43:19 shelbyville kernel:
<ffffffff881b1076>{:bridge:br_handle_frame+374}
<ffffffff802e3d52>{netif_receive_skb+578}
Jun 23 17:43:19 shelbyville kernel:
<ffffffff8806e4b0>{:e100:e100_poll+784} <ffffffff802e40ef>{net_rx_action+239}
Jun 23 17:43:19 shelbyville kernel: <ffffffff801345a3>{__do_softirq+131}
<ffffffff8010beda>{call_softirq+30} <EOI>
Jun 23 17:43:19 shelbyville kernel: <ffffffff8010dc97>{do_softirq+71}
<ffffffff80134d43>{ksoftirqd+131}
Jun 23 17:43:19 shelbyville kernel: <ffffffff80134cc0>{ksoftirqd+0}
<ffffffff80145419>{kthread+217}
Jun 23 17:43:19 shelbyville kernel: <ffffffff8010bc5e>{child_rip+8}
<ffffffff80145340>{kthread+0}
Jun 23 17:43:19 shelbyville kernel: <ffffffff8010bc56>{child_rip+0}
I would be tempted to say it's related to bug 543; however, this occurs in
Dom0, whereas that bug is in DomU.
It seems to coincide with periods of high load on Dom0. However, since Dom0
is used for nothing other than hosting a bunch of DomU guests, I can't see
an obvious reason why it would be under high load. My gut feeling is that
the soft lockup is causing the high load, and not vice versa.
Both the Dom0 machine and all of the guest OSes are unresponsive (to the
point of unusability) while this is happening, but they seem to recover
afterwards with no obvious lingering problems.
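Both call traces go through :ipt_LOG: → printk → the console path
(release_console_sem / force_evtchn_callback), which suggests that a burst
of logged packets could be keeping CPU#0 busy writing to the Xen console.
One workaround I may try is rate-limiting the LOG rules so a packet flood
can't saturate the console. A sketch in iptables-save format (the port and
prefix are illustrative, not my actual ruleset):

```
# Hypothetical rules illustrating the general shape -- not the actual
# ruleset on this machine.  "-m limit" caps how often the LOG target
# fires, so a packet flood can't saturate printk and the console.
-A FORWARD -p tcp --dport 23 -m limit --limit 5/min -j LOG --log-prefix "fw-reject: "
-A FORWARD -p tcp --dport 23 -j REJECT --reject-with tcp-reset
```

If the lockups stop once LOG is rate-limited, that would support the
theory that console output, not the guests, is the source of the load.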
--
Configure bugmail:
http://bugzilla.xensource.com/bugzilla/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are the assignee for the bug, or are watching the assignee.
_______________________________________________
Xen-bugs mailing list
Xen-bugs@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-bugs