WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

RE: [PATCH] Re: [Xen-devel] Xen crash on dom0 shutdown

To: Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, Jan Beulich <jbeulich@xxxxxxxxxx>
Subject: RE: [PATCH] Re: [Xen-devel] Xen crash on dom0 shutdown
From: "Jiang, Yunhong" <yunhong.jiang@xxxxxxxxx>
Date: Wed, 24 Sep 2008 19:31:58 +0800
Accept-language: en-US
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, "Shan, Haitao" <haitao.shan@xxxxxxxxx>
Delivery-date: Wed, 24 Sep 2008 04:32:29 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C4FFC32C.27684%keir.fraser@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <48DA1D83.76E4.0078.0@xxxxxxxxxx> <C4FFC32C.27684%keir.fraser@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AckeJb2U/BQsZIoYEd2egwAX8io7RQADtsJA
Thread-topic: [PATCH] Re: [Xen-devel] Xen crash on dom0 shutdown
Keir Fraser <keir.fraser@xxxxxxxxxxxxx> wrote:
> On 24/9/08 09:59, "Jan Beulich" <jbeulich@xxxxxxxxxx> wrote:
>
>>> Well, this hypercall doesn't do pirq_guest_unbind() on IO-APIC-routed
>>> interrupts either, so I think the problem is wider ranging than just MSI
>>> interrupts. Consider unmap_irq() followed by pirq_guest_unbind() later.
>>> We'll BUG_ON(vector == 0) in the latter function. Looks a bit of a mess to
>>> me. I would say that if there are bindings remaining we should re-direct
>>> the event-channel to a 'no-op' pirq (e.g., -1) and indeed call
>>> pirq_guest_unbind() or similar.
>>
>> How about this one? It doesn't exactly follow the path you suggested,
>> i.e. doesn't mess with event channels, but rather marks the
>> irq<->vector mapping as invalid, allowing only a subsequent event
>> channel unbind (i.e. close) to recover from that state (which seems
>> better in terms of requiring proper discipline in the guest, as it
>> prevents re-using the irq or vector without properly cleaning up).
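The discipline Jan describes — poison the irq<->vector mapping on unmap when bindings remain, and let only an event-channel close recover it — can be sketched as a toy state machine. This is a simplified stand-in, not Xen's actual code: `IRQ_INVALID`, `pirq_to_vector`, and the function signatures here are illustrative assumptions.

```c
#include <assert.h>

/* Toy model of the scheme described above: on unmap, a pirq that still
 * has bindings is poisoned rather than freed, and only an event-channel
 * close clears the poisoned state.  IRQ_INVALID, pirq_to_vector and the
 * function names are illustrative, not Xen's real internals. */

#define IRQ_INVALID (-1)
#define NR_PIRQS 16

static int pirq_to_vector[NR_PIRQS];  /* 0 == unused */

/* Map: refuse a pirq that is busy or poisoned, forcing cleanup first. */
static int map_irq(int pirq, int vector)
{
    if (pirq_to_vector[pirq] != 0)
        return -1;
    pirq_to_vector[pirq] = vector;
    return 0;
}

/* Unmap: with bindings still present, poison the mapping instead of
 * freeing it, so any later re-map or re-bind fails until the guest
 * properly cleans up. */
static int unmap_irq(int pirq, int still_bound)
{
    if (still_bound) {
        pirq_to_vector[pirq] = IRQ_INVALID;
        return -1;
    }
    pirq_to_vector[pirq] = 0;
    return 0;
}

/* Event-channel close is the only path that recovers a poisoned pirq. */
static void evtchn_close(int pirq)
{
    if (pirq_to_vector[pirq] == IRQ_INVALID)
        pirq_to_vector[pirq] = 0;
}
```

The point of the poisoned state is exactly the discipline mentioned above: a guest that unmaps while still bound cannot silently reuse the pirq or vector; it must close the event channel first.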
>
> Yeah, this is the kind of thing I had in mind. I will work on
> this a bit
> more (e.g., need to synchronise on d->evtchn_lock to avoid racing
> EVTCHNOP_bind_pirq; also I'm afraid about leaking MSI vectors on domain
> destruction). Thanks.

Sorry, I just noticed that the vector is not managed at all :$
Currently assign_irq_vector() only checks IO_APIC_VECTOR(irq); for the
AUTO_ASSIGN case there is no tracking at all.
I'm considering whether we can check irq_desc[vector]'s handler to see if the
vector has been assigned or not.
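As a rough illustration of that idea — a vector counts as free while its descriptor still carries the no-op handler — here is a simplified stand-in. The names (`irq_desc`, `no_irq_type`, `assign_irq_vector`) mirror Xen's, but the structures and the vector range are assumptions of this sketch, not the real hypervisor code.

```c
#include <assert.h>

/* Toy model: a vector is free iff its descriptor still has the no-op
 * handler, so allocation can scan for one instead of handing out
 * vectors untracked. */

#define NR_VECTORS 256
#define FIRST_DYNAMIC_VECTOR 0x20  /* skip the exception vectors */

struct hw_interrupt_type { const char *typename; };

static struct hw_interrupt_type no_irq_type = { "none" };
static struct hw_interrupt_type ioapic_type = { "IO-APIC" };

static struct { struct hw_interrupt_type *handler; } irq_desc[NR_VECTORS];

static void reset_vectors(void)
{
    for (int v = 0; v < NR_VECTORS; v++)
        irq_desc[v].handler = &no_irq_type;
}

/* A vector counts as free iff its descriptor has the no-op handler. */
static int vector_is_free(int vector)
{
    return irq_desc[vector].handler == &no_irq_type;
}

/* Scan for an unused vector; return -1 when the space is exhausted. */
static int assign_irq_vector(void)
{
    for (int v = FIRST_DYNAMIC_VECTOR; v < NR_VECTORS; v++)
        if (vector_is_free(v))
            return v;
    return -1;
}
```

With this check, a vector freed without resetting its handler stays unallocatable, which would surface leaks like the MSI-vector one Keir worries about instead of hiding them.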

I also noticed the following snippet in setupOneDevice in
python/xen/xend/server/pciif.py; I suspect it should be indented less.
Also, maybe it would now be better placed under pciback.
           rc = xc.physdev_map_pirq(domid = fe_domid,
                                    index = dev.irq,
                                    pirq  = dev.irq)

>
> -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel