This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/



To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, "Cui, Dexuan" <dexuan.cui@xxxxxxxxx>, 'Jan Beulich' <jbeulich@xxxxxxxxxx>
Subject: [Xen-devel] RE: [Xen-changelog] [xen-unstable] x86: Properly synchronise updates to pirq-to-vector mapping.
From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Date: Fri, 26 Sep 2008 17:34:43 +0800
Accept-language: en-US
Cc: "'xen-devel@xxxxxxxxxxxxxxxxxxx'" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 26 Sep 2008 02:35:28 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C5024693.1D9C2%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <F4AE3CDE26E0164D9E990A34F2D4E0DF0887908D12@xxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C5024693.1D9C2%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AckfhtyZ1sC3KdTCQACI/vjsHV8zYQACrxSgAATpBOkABWjicA==
Thread-topic: [Xen-changelog] [xen-unstable] x86: Properly synchronise updates to pirq-to-vector mapping.
Yes, I'm trying to fix this issue.

Yunhong Jiang

Keir Fraser <mailto:keir.fraser@xxxxxxxxxxxxx> wrote:
> On 26/9/08 05:44, "Cui, Dexuan" <dexuan.cui@xxxxxxxxx> wrote:
>> @@ -491,16 +512,15 @@ int pirq_guest_bind(struct vcpu *v, int
>>      int                 rc = 0;
>>      cpumask_t           cpumask = CPU_MASK_NONE;
>> +    WARN_ON(!spin_is_locked(&v->domain->evtchn_lock));
>> I find this WARN_ON() is triggered harmlessly when I assign a device to an
>> HVM guest. The call trace is XEN_DOMCTL_bind_pt_irq ->
>> pt_irq_create_bind_vtd() -> pirq_guest_bind(). Should we remove the
>> WARN_ON() here?
> I put in that WARN_ON() deliberately, because I think HVM pt_irq locking
> needs some work. Yunhong Jiang is looking into it, I believe (cc'ed).
> Obviously the current situation is temporary -- either the locking in the
> passthrough code will be fixed as I envisage, or we'll agree some other fix
> and the WARN_ON() will be removed or changed. Appended is a relevant
> section of an email I sent a couple of days ago.
> -- Keir
> ------------------
> * I decided to WARN_ON(!spin_is_locked(&d->evtchn_lock)) in
> pirq_guest_[un]bind(). The reason is that in any case those functions do
> not expect to be re-entered -- they really want to be per-domain
> serialised. Furthermore I am pretty certain that the HVM passthrough code
> is not synchronising properly with changes to the pirq-to-vector mapping
> (it uses domain_irq_to_vector() in many places with no care for locking),
> nor is it synchronised with other users of pirq_guest_bind() -- a
> reasonable semantics here would be that a domain pirq can be bound to
> once, either via an event channel or through a virtual PIC in HVM
> emulation context. I therefore think that careful locking is required --
> it may suffice to get rid of (or at least make less use of) the
> hvm_domain.irq_lock and replace its use with evtchn_lock (the only
> consideration is that the latter is not IRQ-safe). The WARN_ON() is a nice
> reminder of work to be done here.

Xen-devel mailing list
