xen-devel

Re: [Xen-devel] [PATCH] pciback: deferred handling of pci configuration

On Tue, 2006-04-25 at 10:17 +0100, Keir Fraser wrote:
> On 25 Apr 2006, at 09:15, Keir Fraser wrote:
> >
> > This is okay in principle, but I found the op_in_progress counter 
> > rather confusing and I'm not sure why it's needed? If it's to prevent 
> > a double schedule_work() call on a single PCI request then I'm not 
> > sure that it's watertight. Does it need to be?
> 
> Let me be a bit more specific here: I think that if an interrupt is 
> delivered after the work function has incremented op_in_progress, but 
> before it clears _PCIF_active, then work can be scheduled erroneously 
> because the IRQ handler will see atomic_dec_and_test() return TRUE.

Maybe you're seeing something that I'm missing, so let me explain how I
think this section of code behaves. Please point out where you think I'm
mistaken or have made an incorrect assumption.

I added the counter to act as a lock so that requests from the frontend
complete one at a time. This way, the virtual configuration space
handlers are guaranteed never to run concurrently on the same device,
and they never should, because there is only one op structure in the
shared memory. When I do the atomic_inc after completing the
configuration space access, I don't believe it matters if the frontend
triggers an interrupt again (before _PCIF_active is cleared). In that
case the backend would simply process another request and overwrite the
return value from the previous op. I don't think work would be scheduled
erroneously here: by triggering an interrupt with the _PCIF_active bit
still set, the frontend is effectively placing a second request, and in
doing so it only affects itself and its own device.

I can't put the atomic_inc after I clear the _PCIF_active bit, because
then the frontend could immediately turn around and send a second,
legitimate request that would fail because op_in_progress may not have
been incremented back yet.
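
To make that ordering concrete, here is a rough sketch of the shape of
the code. The names (pciback_handle_event, pciback_do_op,
pciback_config_handle_op, _PDEVF_op_active) are illustrative rather than
the actual patch, and the sketch uses a guard bit where the patch uses
the op_in_progress counter; the point is releasing the guard before
clearing _PCIF_active:

    /* Rough sketch only; names are illustrative, not the actual patch. */

    static irqreturn_t pciback_handle_event(int irq, void *dev_id,
                                            struct pt_regs *regs)
    {
            struct pciback_device *pdev = dev_id;

            /* Only schedule work if the frontend has posted a request and
             * no work item is already queued or running for this device. */
            if (test_bit(_PCIF_active, (unsigned long *)&pdev->sh_info->flags)
                && !test_and_set_bit(_PDEVF_op_active, &pdev->flags))
                    schedule_work(&pdev->op_work);

            return IRQ_HANDLED;
    }

    static void pciback_do_op(void *data)
    {
            struct pciback_device *pdev = data;

            /* There is only one op structure in the shared page, so this
             * handles at most one access per device at a time. */
            pciback_config_handle_op(pdev);

            /* Release the guard first; an interrupt arriving in the window
             * before _PCIF_active is cleared just causes the same op to be
             * processed again, which only affects the frontend that sent it. */
            clear_bit(_PDEVF_op_active, &pdev->flags);
            smp_mb__after_clear_bit();

            /* ... then complete the handshake, so a legitimate request sent
             * immediately after the frontend sees the bit clear is never
             * refused by a stale guard. */
            clear_bit(_PCIF_active, (unsigned long *)&pdev->sh_info->flags);
            notify_remote_via_irq(pdev->evtchn_irq);
    }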

> If serialised execution of pci requests is important, and it looks like 
> it is, I think the simplest solution is simply to create your own 
> single-threaded workqueue and queue_work() onto that. Personally I get 
> worried about using the shared workqueues anyway, as they're another 
> shared resource to deadlock on.
> 

Can you give me an example of how you think I could be deadlocking on
the workqueue? I believe I'm interacting with it correctly, and a quick
glance at the kernel's workqueue code shows only a single lock per
workqueue. Granted, other driver/kernel code running in the shared
workqueue could deadlock that thread, but I don't see how that is any
different from other kernel bugs that could take down the kernel (I hope
that someday the PCI Backend will run isolated in its own driver
domain). It seemed like overkill to create my own workqueue and kernel
thread that would sit and wait for PCI configuration space requests when
they will most likely be few and far between for most devices.
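
For reference, my understanding of what you're suggesting is roughly the
following (just a sketch, with names of my own choosing):

    static struct workqueue_struct *pciback_wq;

    static int __init pciback_init(void)
    {
            /* A dedicated single-threaded workqueue: work items queued here
             * run one at a time on a private kernel thread, so pci requests
             * are serialised without sharing keventd with the rest of the
             * kernel. */
            pciback_wq = create_singlethread_workqueue("pciback_wq");
            if (!pciback_wq)
                    return -ENOMEM;
            return 0;
    }

    static void __exit pciback_cleanup(void)
    {
            flush_workqueue(pciback_wq);
            destroy_workqueue(pciback_wq);
    }

    /* The event handler would then call
     *         queue_work(pciback_wq, &pdev->op_work);
     * instead of schedule_work(&pdev->op_work). */

That kernel thread sitting mostly idle is the overhead I was hoping to
avoid.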

Ryan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel