WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

Re: [Xen-devel] [PATCH] xencommons: kill xenstored when stop xencommons

To: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] xencommons: kill xenstored when stop xencommons
From: Yu Zhiguo <yuzg@xxxxxxxxxxxxxx>
Date: Tue, 22 Jun 2010 14:53:40 +0800
Cc:
Delivery-date: Mon, 21 Jun 2010 23:54:15 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4C20135E.3000609@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4C20135E.3000609@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird 3.0a1 (Windows/2008050715)

Hi,

Yu Zhiguo wrote:
> xenstored should be killed when xencommons is stopped.
> 

>  do_stop () {
> +     if read 2>/dev/null <$XENSTORED_PIDFILE pid; then
> +             kill $pid
> +             while kill -9 $pid >/dev/null 2>&1; do sleep 0.1; done
> +             rm -f $XENSTORED_PIDFILE
> +     fi
> +
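For reference, a standalone sketch of the pidfile-based stop logic quoted above; the pidfile path and the dummy `sleep` daemon are illustrative stand-ins, not the real xenstored, and the patch's repeated `kill -9` loop is replaced here by a `kill -0` liveness probe that only checks whether the pid still exists:

```shell
#!/bin/sh
# Hedged sketch of the stop logic in the quoted patch. The pidfile
# path and the background 'sleep' process are stand-ins for
# illustration; they are not the real xenstored daemon.
XENSTORED_PIDFILE=/tmp/demo-xenstored.pid

stop_by_pidfile () {
    # Read the pid from the pidfile; do nothing if the file is missing.
    if read pid 2>/dev/null <"$XENSTORED_PIDFILE"; then
        kill "$pid" 2>/dev/null              # ask the process to exit
        # Poll until the pid has really gone away; kill -0 sends no
        # signal, it only probes whether the process still exists.
        while kill -0 "$pid" 2>/dev/null; do sleep 0.1; done
        rm -f "$XENSTORED_PIDFILE"
    fi
}

# Demo: start a dummy long-running process, record its pid, stop it.
sleep 60 &
echo $! >"$XENSTORED_PIDFILE"
stop_by_pidfile
# The pidfile is removed once the process has exited.
ls "$XENSTORED_PIDFILE" 2>/dev/null || echo "stopped and pidfile removed"
```

The `kill -0` probe avoids hammering an already-dead pid with SIGKILL, while still waiting for the process to actually disappear before removing the pidfile.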


It seems that killing xenstored triggers a lockdep warning about a
'HARDIRQ-safe -> HARDIRQ-unsafe' lock order.
Maybe some fix is needed here...


# service xencommons start
# cat /var/run/xenstore.pid
1446
# kill -9 1446


Jun 22 22:51:10 localhost kernel: ======================================================
Jun 22 22:51:10 localhost kernel: [ INFO: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected ]
Jun 22 22:51:10 localhost kernel: 2.6.31.13 #2
Jun 22 22:51:10 localhost kernel: ------------------------------------------------------
Jun 22 22:51:10 localhost kernel: xenstored/1446 [HC0[0]:SC0[0]:HE0:SE1] is trying to acquire:
Jun 22 22:51:10 localhost kernel: (proc_subdir_lock){+.+...}, at: [<ffffffff8119c60f>] xlate_proc_name+0x4c/0xde
Jun 22 22:51:10 localhost kernel:
Jun 22 22:51:10 localhost kernel: and this task is already holding:
Jun 22 22:51:10 localhost kernel: (&port_user_lock){-.....}, at: [<ffffffff8131d3fe>] evtchn_release+0x3a/0xb8
Jun 22 22:51:10 localhost kernel: which would create a new lock dependency:
Jun 22 22:51:10 localhost kernel: (&port_user_lock){-.....} -> (proc_subdir_lock){+.+...}
Jun 22 22:51:10 localhost kernel:
Jun 22 22:51:10 localhost kernel: but this new dependency connects a HARDIRQ-irq-safe lock:
Jun 22 22:51:10 localhost kernel: (&port_user_lock){-.....}
Jun 22 22:51:10 localhost kernel: ... which became HARDIRQ-irq-safe at:
Jun 22 22:51:10 localhost kernel:  [<ffffffff8109915d>] __lock_acquire+0x254/0xc0e
Jun 22 22:51:10 localhost kernel:  [<ffffffff81099c05>] lock_acquire+0xee/0x12e
Jun 22 22:51:10 localhost kernel:  [<ffffffff81521f7f>] _spin_lock+0x45/0x8e
Jun 22 22:51:10 localhost kernel:  [<ffffffff8131dbfd>] evtchn_interrupt+0x3a/0x13f
Jun 22 22:51:10 localhost kernel:  [<ffffffff810c7dd4>] handle_IRQ_event+0x62/0x148
Jun 22 22:51:10 localhost kernel:  [<ffffffff810ca367>] handle_level_irq+0x90/0xf9
Jun 22 22:51:10 localhost kernel:  [<ffffffff813151f1>] xen_evtchn_do_upcall+0x120/0x1c7
Jun 22 22:51:10 localhost kernel:  [<ffffffff8101637e>] xen_do_hypervisor_callback+0x1e/0x30
Jun 22 22:51:10 localhost kernel:  [<ffffffffffffffff>] 0xffffffffffffffff
Jun 22 22:51:10 localhost kernel:
Jun 22 22:51:10 localhost kernel: to a HARDIRQ-irq-unsafe lock:
Jun 22 22:51:10 localhost kernel: (proc_subdir_lock){+.+...}
Jun 22 22:51:10 localhost kernel: ... which became HARDIRQ-irq-unsafe at:
Jun 22 22:51:10 localhost kernel: ...  [<ffffffff810991d1>] __lock_acquire+0x2c8/0xc0e
Jun 22 22:51:10 localhost kernel:  [<ffffffff81099c05>] lock_acquire+0xee/0x12e
Jun 22 22:51:10 localhost kernel:  [<ffffffff81521f7f>] _spin_lock+0x45/0x8e
Jun 22 22:51:10 localhost kernel:  [<ffffffff8119c60f>] xlate_proc_name+0x4c/0xde
Jun 22 22:51:10 localhost kernel:  [<ffffffff8119d370>] __proc_create+0x53/0x148
Jun 22 22:51:10 localhost kernel:  [<ffffffff8119d75d>] proc_symlink+0x3e/0xc5
Jun 22 22:51:10 localhost kernel:  [<ffffffff81a49c03>] proc_root_init+0x75/0xe0
Jun 22 22:51:10 localhost kernel:  [<ffffffff81a2063b>] start_kernel+0x403/0x44c
Jun 22 22:51:10 localhost kernel:  [<ffffffff81a1f930>] x86_64_start_reservations+0xbb/0xd6
Jun 22 22:51:10 localhost kernel:  [<ffffffff81a23e98>] xen_start_kernel+0x5e3/0x5ea
Jun 22 22:51:10 localhost kernel:  [<ffffffffffffffff>] 0xffffffffffffffff
Jun 22 22:51:10 localhost kernel:
Jun 22 22:51:10 localhost kernel: other info that might help us debug this:
...


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel