xen-devel

Re: [Xen-devel] [PATCH] pciback: deferred handling of pci configuration


On 25 Apr 2006, at 14:32, Ryan wrote:

I can't put the atomic_inc after I clear the _PCIF_active bit, as then
the frontend could immediately turn around and send a second,
legitimate request that would fail because op_in_progress may not have
been decremented yet.

Okay, I think I see what you mean. But I think the code would be clearer *and* robust against spurious interrupts (event-channel callbacks) by doing the following:

 * turn op_in_progress into a proper boolean flag, but with type
   'unsigned long' so the atomic bitops (test_and_set_bit and friends)
   can be used on it.

 * in the IRQ handler do:
     if (test_bit(_XEN_PCIF_active, &pdev->...) &&
         !test_and_set_bit(0, &pdev->op_in_progress))
        schedule_work(...);

 * at the very end of pciback_handle_event() do:
     smp_wmb(); /* finish the response /then/ flag the IRQ handler */
     pdev->op_in_progress = 0;
     smp_mb(); /* flag the IRQ handler /then/ check for more work */
     if (test_bit(_XEN_PCIF_active, &pdev->...) &&
         !test_and_set_bit(0, &pdev->op_in_progress))
         schedule_work(...);

I think this is better than the current implementation, where op_in_progress is a counter and zero means in-progress: it gives each queued PCI request clear one-shot set/clear semantics on op_in_progress.
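
To put the whole pattern in one place, something like the following is what I have in mind. It is only a sketch: the 'flags' field is made up to stand in for whatever the '&pdev->...' above really is, and the handler signatures will vary with the kernel version; it is not the actual pciback code.

    #include <linux/interrupt.h>
    #include <linux/workqueue.h>
    #include <linux/bitops.h>

    struct pciback_device {
        unsigned long flags;          /* stand-in for wherever _XEN_PCIF_active lives */
        unsigned long op_in_progress; /* bit 0 set while a request is queued/running */
        struct work_struct op_work;   /* INIT_WORK(&op_work, pciback_handle_event) at setup */
        /* ... */
    };

    /* Event-channel interrupt handler: queue the work item at most once. */
    static irqreturn_t pciback_handle_event_irq(int irq, void *dev_id)
    {
        struct pciback_device *pdev = dev_id;

        if (test_bit(_XEN_PCIF_active, &pdev->flags) &&
            !test_and_set_bit(0, &pdev->op_in_progress))
            schedule_work(&pdev->op_work);

        return IRQ_HANDLED;
    }

    /* Work handler: serve the request, publish the response, then re-check. */
    static void pciback_handle_event(struct work_struct *work)
    {
        struct pciback_device *pdev =
            container_of(work, struct pciback_device, op_work);

        /* ... process the request and write the response here ... */

        smp_wmb();      /* finish the response /then/ flag the IRQ handler */
        pdev->op_in_progress = 0;
        smp_mb();       /* flag the IRQ handler /then/ check for more work */

        if (test_bit(_XEN_PCIF_active, &pdev->flags) &&
            !test_and_set_bit(0, &pdev->op_in_progress))
            schedule_work(&pdev->op_work);
    }

The key point is that only whoever wins the test_and_set_bit() queues the work item, so a spurious event-channel upcall simply falls through.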

If serialised execution of PCI requests is important, and it looks like
it is, I think the simplest solution is to create your own
single-threaded workqueue and queue_work() onto that. Personally I get
worried about using the shared workqueues anyway, as they're another
shared resource to deadlock on.
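
The dedicated-workqueue variant is only a few lines. As a sketch, with pciback_wq and pciback_queue_op made up for illustration:

    #include <linux/workqueue.h>

    static struct workqueue_struct *pciback_wq;   /* made-up name */

    /* One dedicated worker thread, so queued PCI ops run strictly in order. */
    static int __init pciback_wq_init(void)
    {
        pciback_wq = create_singlethread_workqueue("pciback");
        return pciback_wq ? 0 : -ENOMEM;
    }

    /* Use this in place of schedule_work(&pdev->op_work) at the call sites. */
    static void pciback_queue_op(struct pciback_device *pdev)
    {
        queue_work(pciback_wq, &pdev->op_work);
    }

    static void __exit pciback_wq_exit(void)
    {
        flush_workqueue(pciback_wq);
        destroy_workqueue(pciback_wq);
    }

Because the workqueue has a single worker thread, queued requests cannot run concurrently, which gives you the serialisation for free and keeps pciback off the shared keventd threads.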


Can you give me an example of how you think I could be deadlocking on
the workqueue?

No, it was more an aside really. I think the keventd workqueues are totally fine in this case, but I have run into deadlock loops involving the shared workqueues in my own code in the past (arguably due to stupid use of semaphores, though).

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel