xen-devel

Re: [Xen-devel] [PATCH] pciback: deferred handling of pci configuration space accesses

To: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] pciback: deferred handling of pci configuration space accesses
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Tue, 25 Apr 2006 10:17:19 +0100
Cc: Ryan <hap9@xxxxxxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 25 Apr 2006 02:20:48 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <70f0bc8ca96a3633374ecc6863bb2fd6@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1145886765.7564.7.camel@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <70f0bc8ca96a3633374ecc6863bb2fd6@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx

On 25 Apr 2006, at 09:15, Keir Fraser wrote:

Previously, the virtual configuration space handlers ran in the same
context as the event channel interrupt handler (which is atomic in
most, if not all, cases). Now the interrupt handler schedules a
callback function (a bottom half) on the system work queue (keventd),
which is invoked in process context slightly later. This allows the
virtual configuration space handlers to run in process context and to
call any core PCI function, regardless of whether it may sleep.
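
[A minimal sketch of the deferral pattern described above, for
illustration only: the names (pciback_device, pciback_handle_event,
pciback_do_op) are hypothetical rather than the patch's actual
identifiers, and it uses the 2.6-era APIs of the time, i.e. the
three-argument INIT_WORK() and the pt_regs-style IRQ handler.]

    #include <linux/workqueue.h>
    #include <linux/interrupt.h>

    struct pciback_device {
            struct work_struct op_work;
            /* ... shared request structure, flags, etc. ... */
    };

    /* Runs in process context via keventd, so it may call core PCI
     * functions that sleep. */
    static void pciback_do_op(void *data)
    {
            struct pciback_device *pdev = data;
            /* ... perform the virtual config-space access ... */
    }

    /* Event channel interrupt handler: defer the real work. */
    static irqreturn_t pciback_handle_event(int irq, void *dev_id,
                                            struct pt_regs *regs)
    {
            struct pciback_device *pdev = dev_id;

            schedule_work(&pdev->op_work);  /* queue the bottom half */
            return IRQ_HANDLED;
    }

    static void pciback_init_work(struct pciback_device *pdev)
    {
            INIT_WORK(&pdev->op_work, pciback_do_op, pdev);
    }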

This is okay in principle, but I found the op_in_progress counter rather confusing, and I'm not sure why it's needed. If it's there to prevent a double schedule_work() call on a single PCI request, then I'm not sure it's watertight. Does it need to be?

Let me be a bit more specific here: I think that if an interrupt is delivered after the work function has incremented op_in_progress, but before it clears _PCIF_active, then work can be scheduled erroneously because the IRQ handler will see atomic_dec_and_test() return TRUE.
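
[To make the window concrete, here is a sketch of the suspected
interleaving. The names op_in_progress and _PCIF_active come from the
patch under review; the surrounding structure, the bit value, and the
ordering of operations are assumptions reconstructed from this thread,
not the actual patch code.]

    #include <linux/workqueue.h>
    #include <asm/atomic.h>
    #include <asm/bitops.h>

    #define _PCIF_active 0          /* bit name from the patch; value assumed */

    struct pciback_device {
            atomic_t op_in_progress;
            unsigned long flags;
    };

    /* Work function (process context). The race window is marked below. */
    static void pciback_do_op(void *data)
    {
            struct pciback_device *pdev = data;

            atomic_inc(&pdev->op_in_progress);

            /* If the event-channel interrupt fires HERE -- after the
             * increment but before _PCIF_active is cleared -- the IRQ
             * handler's atomic_dec_and_test(&pdev->op_in_progress)
             * returns true, so it calls schedule_work() again for a
             * request that is still being handled. */

            /* ... perform the virtual configuration space access ... */

            clear_bit(_PCIF_active, &pdev->flags);
    }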

If serialised execution of PCI requests is important, and it looks like it is, I think the simplest solution is to create your own single-threaded workqueue and queue_work() onto that. Personally I get worried about using the shared workqueues anyway, as they're another shared resource to deadlock on.
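
[A minimal sketch of that suggestion, reusing the hypothetical
pciback_device/op_work names from the earlier sketches.
create_singlethread_workqueue(), queue_work(), flush_workqueue() and
destroy_workqueue() are the real 2.6-era workqueue APIs; the rest is
illustrative.]

    #include <linux/init.h>
    #include <linux/errno.h>
    #include <linux/workqueue.h>

    struct pciback_device {
            struct work_struct op_work;
    };

    static struct workqueue_struct *pciback_wq;

    static int __init pciback_wq_init(void)
    {
            /* A dedicated single-threaded workqueue: items queued here
             * run one at a time, in order, serialising the PCI requests
             * without touching the shared keventd queue. */
            pciback_wq = create_singlethread_workqueue("pciback");
            if (!pciback_wq)
                    return -ENOMEM;
            return 0;
    }

    /* In the event-channel IRQ handler, instead of schedule_work(): */
    static void pciback_queue_op(struct pciback_device *pdev)
    {
            queue_work(pciback_wq, &pdev->op_work);
    }

    static void __exit pciback_wq_exit(void)
    {
            flush_workqueue(pciback_wq);
            destroy_workqueue(pciback_wq);
    }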

 -- Keir


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel