xen-devel

[Xen-devel] Re: [PATCH][RFC] fix some spinlock issues in vmsi

To: Kouya Shimura <kouya@xxxxxxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH][RFC] fix some spinlock issues in vmsi
From: Qing He <qing.he@xxxxxxxxx>
Date: Fri, 6 Mar 2009 13:31:17 +0800
Cc: Isaku Yamahata <yamahata@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>
Delivery-date: Thu, 05 Mar 2009 21:30:23 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <7kiqmnfjhx.fsf@xxxxxxxxxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <7kiqmnfjhx.fsf@xxxxxxxxxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.17+20080114 (2008-01-14)
On Fri, 2009-03-06 at 10:55 +0800, Kouya Shimura wrote:
> Hi,
> 
> This patch fixes some spinlock issues related to changeset:
>  19246:9bc5799566be "passthrough MSI-X mask bit acceleration"
>  19263:9c5b4efc934d "hvm: passthrough MSI-X: fix ia64 link and MSI-X clean up"
> 
> Without this patch, I got:
> (XEN) Assertion '_spin_is_locked(&pcidevs_lock)' failed at vmsi.c:389
> or
> (XEN) Xen BUG at spinlock.c:25
> 
> 
> I'm not sure this fix is right. Please review it.
> 
> - d->arch.hvm_domain.msixtbl_list_lock is redundant.
>   irq_desc->lock covers it, thus remove the spinlock.

No, it's not redundant. msixtbl_list_lock is there to protect
msixtbl_list, which may contain entries for multiple irqs. Consider
two pt_bind_irq calls running simultaneously: irq_desc->lock alone
wouldn't be enough to protect the list.
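
To illustrate, here is a minimal sketch of the list manipulation that
msixtbl_list_lock serialises. The helper is hypothetical, and the
msixtbl_list field name is assumed from this discussion rather than
copied from the patch:

#include <xen/sched.h>
#include <xen/spinlock.h>
#include <xen/list.h>

/*
 * Illustrative only: two simultaneous pt_bind_irq calls hold two
 * *different* irq_desc locks, yet both append to the same per-domain
 * msixtbl_list, so a lock dedicated to the list is still needed.
 */
static void msixtbl_list_add_sketch(struct domain *d,
                                    struct list_head *new_entry)
{
    unsigned long flags;

    /* The caller holds the irq_desc lock of its own irq only; that
     * does not serialise against the binding path of another irq. */
    spin_lock_irqsave(&d->arch.hvm_domain.msixtbl_list_lock, flags);
    list_add_rcu(new_entry, &d->arch.hvm_domain.msixtbl_list);
    spin_unlock_irqrestore(&d->arch.hvm_domain.msixtbl_list_lock, flags);
}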

> 
> - ASSERT(spin_is_locked(&pcidevs_lock)) is not needed.
>   At first, I intended to wrap the calls with pcidevs_lock.
>   But this assertion seems pointless: pcidevs_lock is
>   for alldevs_list, and the msixtbl_pt_xxx functions never refer to it.

I was thinking that pcidevs_lock is already held for msixtbl_pt_xxx, but
obviously I was wrong :-) However, it's not that pointless: msixtbl_pt_xxx
refers to the contents of pci_dev, and pcidevs_lock is intended to
protect them against destructive modification, such as an improper
PHYSDEVOP_manage_pci_remove. It may not be necessary for a well-written
device model, but that's something we can't rely on.
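
As a sketch of that intent (the helper and the field it reads are
hypothetical; only pcidevs_lock and the assertion pattern come from
the code under discussion):

#include <xen/lib.h>
#include <xen/spinlock.h>
#include <xen/pci.h>

/*
 * Any path that dereferences a struct pci_dev should run with
 * pcidevs_lock held, so that a racing PHYSDEVOP_manage_pci_remove
 * (which also takes pcidevs_lock before tearing the device down)
 * cannot free the structure underneath it.
 */
static uint64_t read_msix_table_base_sketch(const struct pci_dev *pdev)
{
    ASSERT(spin_is_locked(&pcidevs_lock));
    return pdev->msix_tbl_paddr;   /* hypothetical field name */
}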

A similar problem also exists in msixtbl_write/msixtbl_read. Currently,
these functions access the mapped MSI-X table through the pointer stored
in pci_dev; if the pci_dev is removed without the guest's knowledge, the
pci_dev and its fixmaps are torn down, which will definitely break these
functions. Unfortunately, adding spinlocks to msixtbl_{read,write} seems
quite expensive, especially with multiple guest threads, which is why an
RCU-based lock is used here. For a completely safe handling of this,
maybe we need a less brute-force, asynchronous pci_remove hypercall or
something.
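
Roughly, the read side then looks like the sketch below. This assumes
Xen's rcupdate.h interface (rcu_read_lock() taken on a declared read
lock); the entry type and its fields are simplified and illustrative:

#include <xen/sched.h>
#include <xen/rcupdate.h>
#include <xen/list.h>

/* Simplified, illustrative entry: one per passed-through MSI-X table. */
struct msixtbl_entry {
    struct list_head list;
    unsigned long gtable;       /* guest physical base of the table */
    unsigned long gtable_len;   /* length of the table in bytes */
};

static DEFINE_RCU_READ_LOCK(msixtbl_rcu_lock);

static uint32_t msixtbl_read_sketch(struct domain *d, unsigned long addr)
{
    struct msixtbl_entry *entry;
    uint32_t val = ~0u;

    /* Readers take only the cheap RCU read lock; the unregister path
     * removes entries with list_del_rcu() and frees them only after a
     * grace period, so no per-access spinlock is needed. */
    rcu_read_lock(&msixtbl_rcu_lock);

    list_for_each_entry_rcu( entry, &d->arch.hvm_domain.msixtbl_list, list )
    {
        if ( addr >= entry->gtable &&
             addr < entry->gtable + entry->gtable_len )
        {
            /* Placeholder: the real code reads the mapped table word here. */
            val = 0;
            break;
        }
    }

    rcu_read_unlock(&msixtbl_rcu_lock);
    return val;
}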

> 
> - In "debug=y" environment, xmalloc must not be called from both
>   irq_enable and irq_disable state. Otherwise, 
>   the assertion failure occurs in check_lock().
>   So, in msixtbl_pt_register, xmalloc is called beforehand.
>   I thinks this is ugly, though.

This is fine by me, thank you for pointing this out.
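
For reference, the pattern described above comes out roughly as the
sketch below (function shape and names are illustrative, reusing the
simplified msixtbl_entry from the earlier sketch): with debug=y,
check_lock() insists that every lock is always taken with IRQs in the
same state, and xmalloc() takes a lock internally, so the allocation
has to happen before the IRQ-disabled region.

#include <xen/sched.h>
#include <xen/irq.h>
#include <xen/spinlock.h>
#include <xen/xmalloc.h>
#include <xen/errno.h>

static int msixtbl_pt_register_sketch(struct domain *d,
                                      struct irq_desc *desc)
{
    struct msixtbl_entry *entry;
    unsigned long flags;

    /* Allocate while IRQs are still enabled and no lock is held;
     * xmalloc() takes its own lock, and with debug=y check_lock()
     * would trip if that lock were taken from both IRQ states. */
    entry = xmalloc(struct msixtbl_entry);
    if ( entry == NULL )
        return -ENOMEM;

    spin_lock_irqsave(&desc->lock, flags);
    /* ... initialise 'entry' and add it to d's msixtbl_list here ... */
    spin_unlock_irqrestore(&desc->lock, flags);

    return 0;
}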

Thanks,
Qing
> 
> 
> Thanks,
> Kouya
> 
> Signed-off-by: Kouya Shimura <kouya@xxxxxxxxxxxxxx>
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel