This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] pt_irq_time_out() dropping d->event_lock before calling pirq_guest_eoi()

To: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] pt_irq_time_out() dropping d->event_lock before calling pirq_guest_eoi()
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Fri, 08 Apr 2011 12:29:00 +0100
Delivery-date: Fri, 08 Apr 2011 04:35:19 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4D9F0320020000780003A917@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4D9F0320020000780003A917@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> On 08.04.11 at 12:44, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
> What is the reason for this? irq_desc's lock nests inside d->event_lock,
> and not having to drop the lock early would not only allow the two loops
> to be folded, but would also allow calling a shortcut version of
> pirq_guest_eoi() that takes the already-obtained pirq->irq mapping
> (likely to be created when I split up the d->nr_pirqs-sized arrays
> I'm currently working on).

Actually, this is the only place where pirq_guest_eoi() gets called without
holding d->event_lock, so it rather smells like a mistake; I'll go ahead and
fold the second loop into the first, and only undo that if there turns out
to be an actual reason for the current behavior.

