
[PATCH] Xen/gnttab: improve error handling in gnttab_dma_{alloc,free}_pages()


  • To: "xen-devel@xxxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Mon, 16 Feb 2026 09:55:07 +0100
  • Cc: Juergen Gross <jgross@xxxxxxxx>, Stefano Stabellini <sstabellini@xxxxxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Oleksandr Andrushchenko <oleksandr_andrushchenko@xxxxxxxx>
  • Delivery-date: Mon, 16 Feb 2026 08:55:12 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

Both decrease-reservation and populate-physmap can succeed just partially.
For the former this may be pretty unlikely, but the latter can clearly hit
out-of-memory conditions in the hypervisor as well as denial because the
allocation would exceed the domain's allowance (there's no interaction with
the balloon driver here either).
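
For reference, these helpers (in drivers/xen/mem-reservation.c) thinly wrap
the respective memory_op hypercalls, whose return value is the number of
extents actually processed rather than an all-or-nothing result. A condensed
paraphrase (not verbatim) of the decrease side:

#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

int xenmem_reservation_decrease(int count, xen_pfn_t *frames)
{
        struct xen_memory_reservation reservation = {
                .extent_order = 0,              /* single-page extents */
                .domid        = DOMID_SELF,
        };

        set_xen_guest_handle(reservation.extent_start, frames);
        reservation.nr_extents = count;

        /* May come back less than "count" on partial success. */
        return HYPERVISOR_memory_op(XENMEM_decrease_reservation, &reservation);
}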

In gnttab_dma_free_pages() we simply can't give back to the system what
hasn't been re-filled with backing memory. In gnttab_dma_alloc_pages() the
two parts of the overall region need to be dealt with differently.

While no present caller of gnttab_dma_free_pages() checks its return
value, also don't use -EFAULT there: It really is an out-of-memory
condition. (In gnttab_dma_alloc_pages() -EFAULT doesn't look quite right
either, but I couldn't think of a clearly better error code there.)

Fixes: 9bdc7304f536 ("xen/grant-table: Allow allocating buffers suitable for DMA")
Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
---
I'm likely screwing things up, as I can't understand how this is intended
to work: args->dev_bus_addr shouldn't really be in pseudo-physical address
space, or else the address isn't suitable for handing to a device in order
to DMA to/from the buffer. Hence using __phys_to_pfn() on it and then
handing the result to pfn_to_page() feels bogus. Was all of this perhaps
only ever intended for (and tested with) translated domains, with the uses
of xenmem_reservation_va_mapping_*() merely being masquerade?
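
For context, the pattern in question (quoted approximately from
gnttab_dma_alloc_pages(); it is left alone by this patch) is:

        start_pfn = __phys_to_pfn(args->dev_bus_addr);
        for (pfn = start_pfn, i = 0; pfn < start_pfn + args->nr_pages;
             pfn++, i++) {
                struct page *page = pfn_to_page(pfn);

                args->pages[i] = page;
                args->frames[i] = xen_page_to_gfn(page);
                xenmem_reservation_scrub_page(page);
        }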

Furthermore the allocated buffer ought to have been contiguous in
machine / DMA space. Yet the way it's re-populated upon freeing of the
area doesn't guarantee that at all.
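
Just to illustrate the point: restoring contiguity would presumably require
re-populating the area as a single higher-order extent, along the lines of
this hypothetical sketch (repopulate_contig() and the 32-bit addressing
limit are made up for illustration):

#include <xen/interface/memory.h>
#include <asm/xen/hypercall.h>

static int repopulate_contig(xen_pfn_t *base_frame, unsigned int order)
{
        struct xen_memory_reservation reservation = {
                .extent_order = order,  /* one machine-contiguous 2^order chunk */
                .address_bits = 32,     /* example device addressing limit */
                .domid        = DOMID_SELF,
        };

        set_xen_guest_handle(reservation.extent_start, base_frame);
        reservation.nr_extents = 1;

        /* 1 == the single extent was populated; anything else is failure. */
        return HYPERVISOR_memory_op(XENMEM_populate_physmap, &reservation);
}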

All of what is done assumes dma_free_{coherent,wc,attr}() is capable of
freeing piecemeal. I only checked dma_release_from_dev_coherent() to
fulfill this requirement. If this can't be relied upon, please consider
this submission as merely a bug report (for someone else to fix).
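
To spell out that assumption in isolation (hypothetical example with
made-up sizes, allocating once and freeing in two pieces):

#include <linux/dma-mapping.h>

static void piecemeal_free_demo(struct device *dev)
{
        dma_addr_t bus;
        void *va = dma_alloc_coherent(dev, 8 * PAGE_SIZE, &bus, GFP_KERNEL);

        if (!va)
                return;

        /* Give back only the tail of the allocation ... */
        dma_free_coherent(dev, 5 * PAGE_SIZE, va + 3 * PAGE_SIZE,
                          bus + 3 * PAGE_SIZE);
        /* ... and, separately, the head. */
        dma_free_coherent(dev, 3 * PAGE_SIZE, va, bus);
}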

--- a/drivers/xen/grant-table.c
+++ b/drivers/xen/grant-table.c
@@ -1095,6 +1095,28 @@ int gnttab_dma_alloc_pages(struct gnttab
        ret = xenmem_reservation_decrease(args->nr_pages, args->frames);
        if (ret != args->nr_pages) {
                pr_debug("Failed to decrease reservation for DMA buffer\n");
+               if (ret >= 0) {
+                       /* Free the part where decrease didn't work. */
+                       size_t done = ret << PAGE_SHIFT;
+
+                       xenmem_reservation_va_mapping_update(args->nr_pages - ret,
+                                                            &args->pages[ret],
+                                                            &args->frames[ret]);
+
+                       if (args->coherent)
+                               dma_free_coherent(args->dev, size - done,
+                                                 args->vaddr + done,
+                                                 args->dev_bus_addr + done);
+                       else
+                               dma_free_wc(args->dev, size - done,
+                                           args->vaddr + done,
+                                           args->dev_bus_addr + done);
+
+                       if (!ret) /* Nothing else to do? */
+                               return -EFAULT;
+
+                       args->nr_pages = ret;
+               }
                ret = -EFAULT;
                goto fail;
        }
@@ -1128,7 +1150,12 @@ int gnttab_dma_free_pages(struct gnttab_
        ret = xenmem_reservation_increase(args->nr_pages, args->frames);
        if (ret != args->nr_pages) {
                pr_debug("Failed to increase reservation for DMA buffer\n");
-               ret = -EFAULT;
+               /* Can only free what has been re-filled. */
+               if (!ret)
+                       return -ENOMEM;
+               if (ret > 0)
+                       args->nr_pages = ret;
+               ret = -ENOMEM;
        } else {
                ret = 0;
        }
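
For completeness, since no present caller checks the return value: with the
change above a caller could at least tell that something was left behind,
along the lines of this hypothetical sketch:

        int rc = gnttab_dma_free_pages(&args);

        if (rc == -ENOMEM)
                /*
                 * Partial failure: args.nr_pages was reduced to the number
                 * of pages actually given back (or is unchanged if none
                 * could be re-filled at all).
                 */
                pr_warn("gnttab: DMA buffer only partially freed\n");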



 

