
Re: [PATCH v8 2/8] vpci: Refactor REGISTER_VPCI_INIT


  • To: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • From: "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Date: Fri, 25 Jul 2025 08:22:22 +0000
  • Cc: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>, "Huang, Ray" <Ray.Huang@xxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, "Orzel, Michal" <Michal.Orzel@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Julien Grall <julien@xxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, "Chen, Jiqian" <Jiqian.Chen@xxxxxxx>
  • Delivery-date: Fri, 25 Jul 2025 08:22:29 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 2025/7/25 16:08, Roger Pau Monné wrote:
> On Fri, Jul 25, 2025 at 03:15:13AM +0000, Chen, Jiqian wrote:
>> On 2025/7/24 22:28, Roger Pau Monné wrote:
>>> On Thu, Jul 24, 2025 at 01:50:00PM +0800, Jiqian Chen wrote:
>>>> diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
>>>> index 74211301ba10..eb3e7dcd1068 100644
>>>> --- a/xen/drivers/vpci/msix.c
>>>> +++ b/xen/drivers/vpci/msix.c
>>>> @@ -703,9 +703,18 @@ static int cf_check init_msix(struct pci_dev *pdev)
>>>>      pdev->vpci->msix = msix;
>>>>      list_add(&msix->next, &d->arch.hvm.msix_tables);
>>>>  
>>>> -    return 0;
>>>> +    /*
>>>> +     * vPCI header initialization will have mapped the whole BAR into the
>>>> +     * p2m, as MSI-X capability was not yet initialized.  Carve a hole for
>>>> +     * the MSI-X table here, so that Xen can trap accesses.
>>>> +     */
>>>> +    spin_lock(&pdev->vpci->lock);
>>>> +    rc = vpci_make_msix_hole(pdev);
>>>> +    spin_unlock(&pdev->vpci->lock);
>>>
>>> I should have asked in the last version, but why do you take the vPCI
>>> lock here?
>>>
>>> The path that ASSERTs the lock is held should never be taken when
>>> called from init_msix().  Is there some path I'm missing in
>>> vpci_make_msix_hole() that requires the vPCI lock to be held?
>>>
>>> The rest LGTM.
>> Sorry, I forgot to delete this.
>> Feel free to change it when you submit.
>> Or I will change it by sending a new version.
> 
> Can you ensure it also works without the locking?  I think so, but I
> haven't tested myself.
Yes, I tested this locally before I replied to your last email.
MSI-X and everything else work fine.
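
For reference, the agreed follow-up amounts to dropping the locking
around the call in the hunk quoted above, along these lines (an
illustrative sketch only, not a tested or submitted patch; the comment
wording is mine, summarizing the review above):

-    spin_lock(&pdev->vpci->lock);
-    rc = vpci_make_msix_hole(pdev);
-    spin_unlock(&pdev->vpci->lock);
+    /*
+     * No locking needed: the path in vpci_make_msix_hole() that
+     * asserts the vPCI lock is held is not reachable when called
+     * from init_msix().
+     */
+    rc = vpci_make_msix_hole(pdev);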

> 
> Thanks, Roger.

-- 
Best regards,
Jiqian Chen.

 

