To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 5/6] xen-gntalloc: Userspace grant allocation driver
From: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
Date: Wed, 09 Feb 2011 13:52:56 -0500
Cc: jeremy@xxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, Ian.Campbell@xxxxxxxxxx
Delivery-date: Wed, 09 Feb 2011 10:53:49 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20110208224804.GB8321@xxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: National Security Agency
References: <1296753544-13323-1-git-send-email-dgdegra@xxxxxxxxxxxxx> <1296753544-13323-6-git-send-email-dgdegra@xxxxxxxxxxxxx> <20110208224804.GB8321@xxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101209 Fedora/3.1.7-0.35.b3pre.fc13 Thunderbird/3.1.7
On 02/08/2011 05:48 PM, Konrad Rzeszutek Wilk wrote:
> On Thu, Feb 03, 2011 at 12:19:03PM -0500, Daniel De Graaf wrote:
>> This allows a userspace application to allocate a shared page for
>> implementing inter-domain communication or device drivers. These
>> shared pages can be mapped using the gntdev device or by the kernel
>> in another domain.
>>
>> Signed-off-by: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
>> ---
>>  drivers/xen/Kconfig    |    8 +
>>  drivers/xen/Makefile   |    2 +
>>  drivers/xen/gntalloc.c |  486 ++++++++++++++++++++++++++++++++++++++++++++++++
>>  include/xen/gntalloc.h |   50 +++++
>>  4 files changed, 546 insertions(+), 0 deletions(-)
>>  create mode 100644 drivers/xen/gntalloc.c
>>  create mode 100644 include/xen/gntalloc.h
>>
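As a side note on the interface described above, here is a minimal userspace sketch of how the ioctl would be driven. It assumes the struct ioctl_gntalloc_alloc_gref layout from include/xen/gntalloc.h in this patch is visible to userspace, and that the device shows up as /dev/xen/gntalloc; the device path and the remote domid below are assumptions, not part of the patch.

/* Hedged sketch: allocate one grantable page and map it locally.
 * Error handling kept minimal; for count > 1 a larger buffer holding
 * the variable-length gref_ids[] array would be needed. */
#include <stdio.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <xen/gntalloc.h>   /* struct ioctl_gntalloc_alloc_gref et al. */

int main(void)
{
	int fd = open("/dev/xen/gntalloc", O_RDWR);
	if (fd < 0)
		return 1;

	/* One writable page, granted to the (assumed) peer domain 1. */
	struct ioctl_gntalloc_alloc_gref op = {
		.domid = 1,
		.flags = GNTALLOC_FLAG_WRITABLE,
		.count = 1,
	};
	if (ioctl(fd, IOCTL_GNTALLOC_ALLOC_GREF, &op) < 0)
		return 1;

	/* op.index is the offset to pass to mmap(); op.gref_ids[0] is the
	 * grant reference to hand to the peer, which can map it with gntdev. */
	void *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
			  fd, op.index);
	if (page == MAP_FAILED)
		return 1;

	printf("gref %u mapped at %p\n", op.gref_ids[0], page);
	return 0;
}

The matching dealloc ioctl and munmap() teardown are omitted here.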

[snip]

>> +static int add_grefs(struct ioctl_gntalloc_alloc_gref *op,
>> +    uint32_t *gref_ids, struct gntalloc_file_private_data *priv)
>> +{
>> +    int i, rc, readonly;
>> +    LIST_HEAD(queue_gref);
>> +    LIST_HEAD(queue_file);
>> +    struct gntalloc_gref *gref;
>> +
>> +    readonly = !(op->flags & GNTALLOC_FLAG_WRITABLE);
>> +    rc = -ENOMEM;
>> +    for (i = 0; i < op->count; i++) {
>> +            gref = kzalloc(sizeof(*gref), GFP_KERNEL);
>> +            if (!gref)
>> +                    goto undo;
>> +            list_add_tail(&gref->next_gref, &queue_gref);
>> +            list_add_tail(&gref->next_file, &queue_file);
>> +            gref->users = 1;
>> +            gref->file_index = op->index + i * PAGE_SIZE;
>> +            gref->page = alloc_page(GFP_KERNEL|__GFP_ZERO);
>> +            if (!gref->page)
>> +                    goto undo;
>> +
>> +            /* Grant foreign access to the page. */
>> +            gref->gref_id = gnttab_grant_foreign_access(op->domid,
>> +                    pfn_to_mfn(page_to_pfn(gref->page)), readonly);
>> +            if (gref->gref_id < 0) {
>> +                    rc = gref->gref_id;
>> +                    goto undo;
>> +            }
>> +            gref_ids[i] = gref->gref_id;
>> +    }
>> +
>> +    /* Add to gref lists. */
>> +    spin_lock(&gref_lock);
>> +    list_splice_tail(&queue_gref, &gref_list);
>> +    list_splice_tail(&queue_file, &priv->list);
>> +    spin_unlock(&gref_lock);
>> +
>> +    return 0;
>> +
>> +undo:
>> +    spin_lock(&gref_lock);
>> +    gref_size -= (op->count - i);
> 
> So we decrease the gref_size by the count of the ones that we 
> allocated..

No, (op->count - i) is the number that we haven't yet allocated. Those
aren't in queue_file, so __del_gref is never called on them.
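To spell out the arithmetic (hypothetical numbers, not from the patch): say op->count is 4, the ioctl has already done gref_size += 4, and alloc_page() fails at i = 2. The undo path subtracts (4 - 2) = 2 for the two grefs that were never created, and __del_gref() then runs once for each of the 2 grefs already on queue_file, each doing gref_size--. The total decrease is 2 + 2 = 4, balancing the earlier increase.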

>> +
>> +    list_for_each_entry(gref, &queue_file, next_file) {
>> +            /* __del_gref does not remove from queue_file */
>> +            __del_gref(gref);
> 
> .. but __del_gref decreases the gref_size by one, so wouldn't
> we decrease by too much?

[snip]

>> +static long gntalloc_ioctl_alloc(struct gntalloc_file_private_data *priv,
>> +            struct ioctl_gntalloc_alloc_gref __user *arg)
>> +{
>> +    int rc = 0;
>> +    struct ioctl_gntalloc_alloc_gref op;
>> +    uint32_t *gref_ids;
>> +
>> +    pr_debug("%s: priv %p\n", __func__, priv);
>> +
>> +    if (copy_from_user(&op, arg, sizeof(op))) {
>> +            rc = -EFAULT;
>> +            goto out;
>> +    }
>> +
>> +    gref_ids = kzalloc(sizeof(gref_ids[0]) * op.count, GFP_TEMPORARY);
>> +    if (!gref_ids) {
>> +            rc = -ENOMEM;
>> +            goto out;
>> +    }
>> +
>> +    spin_lock(&gref_lock);
>> +    /* Clean up pages that were at zero (local) users but were still mapped
>> +     * by remote domains. Since those pages count towards the limit that we
>> +     * are about to enforce, removing them here is a good idea.
>> +     */
>> +    do_cleanup();
>> +    if (gref_size + op.count > limit) {
>> +            spin_unlock(&gref_lock);
>> +            rc = -ENOSPC;
>> +            goto out_free;
>> +    }
>> +    gref_size += op.count;
>> +    op.index = priv->index;
>> +    priv->index += op.count * PAGE_SIZE;
>> +    spin_unlock(&gref_lock);
>> +
>> +    rc = add_grefs(&op, gref_ids, priv);
>> +    if (rc < 0)
>> +            goto out_free;
> 
> Should we clean up priv->index to its earlier value?

We could, but this could also introduce a possible race if two threads were
running the ioctl at the same time. There's no harm in letting the index
increase, since it is 64-bit and so doesn't have overflow issues.
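
To make that race concrete, a hypothetical interleaving (not code from the patch):

/*  Thread A                            Thread B
 *  spin_lock(&gref_lock);
 *  op.index = priv->index;     (= X)
 *  priv->index += 4 * PAGE_SIZE;
 *  spin_unlock(&gref_lock);
 *                                      spin_lock(&gref_lock);
 *                                      op.index = priv->index;  (= X + 4*PAGE_SIZE)
 *                                      priv->index += 2 * PAGE_SIZE;
 *                                      spin_unlock(&gref_lock);
 *  add_grefs() fails;
 *  priv->index = X;   <-- a naive rollback would hand out offsets
 *                         overlapping B's already-reserved range
 */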

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
