WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] [PATCH] Fix request_module/modprobe deadlock in netfront accelerator
From: Kieran Mansley <kmansley@xxxxxxxxxxxxxx>
Date: Tue, 26 Feb 2008 16:01:42 +0000
Delivery-date: Tue, 26 Feb 2008 08:02:15 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
There appears to be a potential deadlock in the netfront accelerator
plugin support.  When the configured accelerator changes in xenstore,
netfront tries to load the new plugin using request_module().  It does
this from a workqueue work item.  request_module() invokes modprobe,
which in some circumstances (I'm not sure exactly which - I've not
managed to reproduce it myself) seems to try to flush the system-wide
workqueue, and so it deadlocks.  This patch fixes the problem by giving
the accel watch work item its own workqueue, so that modprobe can
successfully flush the system-wide one.
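The failure mode can be sketched with a toy model (Python threads standing
in for kernel workqueues; the WorkQueue class and names below are
illustrative only, not the kernel API).  A work item that flushes the queue
it is itself running on can never complete, because the flush waits for
that very item; moving the item to its own queue, as the patch does, lets
it flush the shared queue safely:

```python
import queue
import threading

class WorkQueue:
    """Toy single-threaded work queue, loosely modelling a kernel
    workqueue_struct (an assumption for illustration)."""
    def __init__(self, name):
        self.name = name
        self._q = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        while True:
            fn = self._q.get()
            fn()                  # run the work item
            self._q.task_done()   # mark it finished only after it returns

    def queue_work(self, fn):
        self._q.put(fn)

    def flush(self):
        # Blocks until every queued item has finished.  Calling this from
        # a work item running on this same queue deadlocks: the caller's
        # own item is never marked done while it is still running.
        self._q.join()

system_wq = WorkQueue("events")       # models the shared system workqueue
accel_wq = WorkQueue("accel_watch")   # models the dedicated queue the patch adds

done = []

def accel_work():
    # Models the accel watch handler: request_module() -> modprobe,
    # which (in the reported scenario) flushes the system-wide queue.
    system_wq.flush()   # safe: we are running on accel_wq, not system_wq
    done.append("plugin loaded")

accel_wq.queue_work(accel_work)
accel_wq.flush()
print(done)
```

Had accel_work been queued on system_wq instead (the pre-patch situation,
where schedule_work() puts it on the shared queue), the inner flush would
wait forever on the still-running item.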

Signed-off-by: Kieran Mansley <kmansley@xxxxxxxxxxxxxx>

diff -r 1edfea26a2a9 drivers/xen/netfront/accel.c
--- a/drivers/xen/netfront/accel.c
+++ b/drivers/xen/netfront/accel.c
@@ -60,6 +60,9 @@ static struct list_head accelerators_lis
 /* Lock to protect access to accelerators_list */
 static spinlock_t accelerators_lock;
 
+/* Workqueue to process acceleration configuration changes */
+struct workqueue_struct *accel_watch_workqueue;
+
 /* Mutex to prevent concurrent loads and suspends, etc. */
 DEFINE_MUTEX(accelerator_mutex);
 
@@ -67,12 +70,17 @@ void netif_init_accel(void)
 {
        INIT_LIST_HEAD(&accelerators_list);
        spin_lock_init(&accelerators_lock);
+
+       accel_watch_workqueue = create_workqueue("accel_watch");
 }
 
 void netif_exit_accel(void)
 {
        struct netfront_accelerator *accelerator, *tmp;
        unsigned long flags;
+
+       flush_workqueue(accel_watch_workqueue);
+       destroy_workqueue(accel_watch_workqueue);
 
        spin_lock_irqsave(&accelerators_lock, flags);
 
@@ -156,7 +164,7 @@ static void accel_watch_changed(struct x
        struct netfront_accel_vif_state *vif_state = 
                container_of(watch, struct netfront_accel_vif_state,
                             accel_watch);
-       schedule_work(&vif_state->accel_work);
+       queue_work(accel_watch_workqueue, &vif_state->accel_work);
 }
 
 
@@ -191,7 +199,7 @@ void netfront_accelerator_remove_watch(s
                kfree(vif_state->accel_watch.node);
                vif_state->accel_watch.node = NULL;
 
-               flush_scheduled_work();
+               flush_workqueue(accel_watch_workqueue);
 
                /* Clean up any state left from watch */
                if (vif_state->accel_frontend != NULL) {


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel