To: Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH 5/6] xen-gntalloc: Userspace grant allocation driver
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Wed, 15 Dec 2010 17:05:16 -0800
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, Ian.Campbell@xxxxxxxxxx
Delivery-date: Wed, 15 Dec 2010 17:06:11 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4D08CE4E.2050505@xxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <1292338553-20575-1-git-send-email-dgdegra@xxxxxxxxxxxxx> <1292338553-20575-6-git-send-email-dgdegra@xxxxxxxxxxxxx> <4D07E4B9.1080401@xxxxxxxx> <4D07EA5C.8050605@xxxxxxxxxxxxx> <4D07F266.9000008@xxxxxxxx> <4D08CE4E.2050505@xxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.2.13) Gecko/20101209 Fedora/3.1.7-0.35.b3pre.fc14 Lightning/1.0b3pre Thunderbird/3.1.7
On 12/15/2010 06:18 AM, Daniel De Graaf wrote:
> On 12/14/2010 05:40 PM, Jeremy Fitzhardinge wrote:
>> On 12/14/2010 02:06 PM, Daniel De Graaf wrote:
>>>>> +static int gntalloc_mmap(struct file *filp, struct vm_area_struct *vma)
>>>>> +{
>>>>> + struct gntalloc_file_private_data *priv = filp->private_data;
>>>>> + struct gntalloc_gref *gref;
>>>>> +
>>>>> + if (debug)
>>>>> +         printk("%s: priv %p, page %lu\n", __func__,
>>>>> +                priv, vma->vm_pgoff);
>>>>> +
>>>>> + /*
>>>>> +  * There is a 1-to-1 correspondence of grant references to shared
>>>>> +  * pages, so it only makes sense to map exactly one page per
>>>>> +  * call to mmap().
>>>>> +  */
>>>> Single-page mmap makes sense if the only possible use-cases are for
>>>> single-page mappings, but if you're talking about framebuffers and the
>>>> like it seems like a very awkward way to use mmap.  It would be cleaner
>>>> from an API perspective to have a user-mode defined flat address space
>>>> indexed by pgoff which maps to an array of grefs, so you can sensibly do
>>>> a multi-page mapping.
>>>>
>>>> It would also allow you to hide the grefs from usermode entirely.  Then
>>>> it's just up to usermode to choose suitable file offsets for itself.
>>> I considered this, but wanted to keep userspace compatibility with the
>>> previously created interface.
>> Is that private to you, or something in broader use?
> This module was used as part of Qubes (http://www.qubes-os.org). The device
> path has changed (/dev/gntalloc to /dev/xen/gntalloc), and the API change
> adds useful functionality, so I don't think we must keep compatibility. This
> will also allow cleaning up the interface to remove parameters that make no
> sense (owner_domid, for example).

Ah, right.  Well, that means it has at least been prototyped, but I don't
think we should be constrained by the original ABI if we can make clear
improvements.
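
To make that concrete, the allocation ioctl could take a page count and hand
back a single mmap() offset for the whole batch, roughly like this (just a
sketch; the structure layout and ioctl number are illustrative, not a
worked-out ABI):

#include <linux/ioctl.h>
#include <linux/types.h>

#define IOCTL_GNTALLOC_ALLOC_GREF \
        _IOC(_IOC_NONE, 'G', 5, sizeof(struct ioctl_gntalloc_alloc_gref))

struct ioctl_gntalloc_alloc_gref {
        /* IN: domain to grant access to, and number of pages to share */
        __u16 domid;
        __u16 flags;
        __u32 count;
        /* OUT: offset to pass to mmap() for this batch of pages */
        __u64 index;
        /* OUT: one grant reference per page (variable length) */
        __u32 gref_ids[1];
};

Userspace would then mmap() count pages at that offset in one call; whether
the grefs themselves need to be exposed at all depends on how they get
communicated to the peer domain.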

>>>  If there's no reason to avoid doing so, I'll
>>> change the ioctl interface to allocate an array of grants and calculate the
>>> offset similar to how gntdev does currently (picks a suitable open slot).
>> I guess there are three options: you could get the kernel to allocate
>> extents, make usermode do it, or have one fd per extent and always start
>> from offset 0.  I guess the last could get very messy if you want to
>> have lots of mappings...  Making usermode define the offsets seems
>> simplest and most flexible, because then they can stitch together the
>> file-offset space in any way that's convenient to them (you just need to
>> deal with overlaps in that space).
> Would it be useful to also give userspace control over the offsets in gntdev?
>
> One argument for doing it in the kernel is to avoid needing to track what
> offsets are already being used (and then having the kernel re-check that).

Hm, yeah, that could be a bit fiddly.  I guess you'd need to stick them
into an rbtree or something.
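
Something like the following is what I have in mind; purely a sketch with
made-up names, keyed on the starting page offset and rejecting overlaps:

#include <linux/errno.h>
#include <linux/rbtree.h>

/* One user-chosen range of file offsets backing an array of grants. */
struct gnt_range {
        struct rb_node node;
        unsigned long pgoff;    /* first page offset, chosen by userspace */
        unsigned long count;    /* number of pages in the range */
};

/* Insert a new range, refusing any overlap with an existing one. */
static int gnt_range_insert(struct rb_root *root, struct gnt_range *new)
{
        struct rb_node **link = &root->rb_node, *parent = NULL;

        while (*link) {
                struct gnt_range *cur = rb_entry(*link, struct gnt_range, node);

                parent = *link;
                if (new->pgoff + new->count <= cur->pgoff)
                        link = &(*link)->rb_left;
                else if (cur->pgoff + cur->count <= new->pgoff)
                        link = &(*link)->rb_right;
                else
                        return -EBUSY;  /* overlaps an existing range */
        }

        rb_link_node(&new->node, parent, link);
        rb_insert_color(&new->node, root);
        return 0;
}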

> While this isn't hard, IOCTL_GNTDEV_GET_OFFSET_FOR_VADDR only exists in
> order to relieve userspace of the need to track its mappings, so this
> seems to have been a concern before.

It would be nice to have them symmetric.  However, it's easy to implement
GET_OFFSET_FOR_VADDR either way - given a vaddr, you can look up the vma
and return its pgoff.
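
Roughly like this (untested sketch; a real implementation would also check
that the vma actually belongs to this device before trusting it):

#include <linux/errno.h>
#include <linux/mm.h>
#include <linux/sched.h>

static int get_offset_for_vaddr(unsigned long vaddr, u64 *offset, u32 *count)
{
        struct vm_area_struct *vma;
        int rv = -EFAULT;

        down_read(&current->mm->mmap_sem);
        vma = find_vma(current->mm, vaddr);
        if (vma && vma->vm_start <= vaddr) {
                *offset = (u64)vma->vm_pgoff << PAGE_SHIFT;
                *count = (vma->vm_end - vma->vm_start) >> PAGE_SHIFT;
                rv = 0;
        }
        up_read(&current->mm->mmap_sem);
        return rv;
}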

It looks like GET_OFFSET_FOR_VADDR is only used in xc_gnttab_munmap(), so
that libxc can recover the offset and the page count from the vaddr and
pass them to IOCTL_GNTDEV_UNMAP_GRANT_REF.

Also, it seems to fail unmaps which don't exactly correspond to a
MAP_GRANT_REF.  I guess that's OK, but it looks a bit strange.
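
For reference, the userspace side of that unmap path boils down to something
like this (paraphrased, not the literal libxc code; assumes 4 KiB pages):

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <xen/sys/gntdev.h>     /* header location may vary */

static int gnttab_munmap(int fd, void *start_address, uint32_t count)
{
        struct ioctl_gntdev_get_offset_for_vaddr get_offset;
        struct ioctl_gntdev_unmap_grant_ref unmap;

        /* Recover the mmap() offset that was used to map these grants. */
        memset(&get_offset, 0, sizeof(get_offset));
        get_offset.vaddr = (uint64_t)(uintptr_t)start_address;
        if (ioctl(fd, IOCTL_GNTDEV_GET_OFFSET_FOR_VADDR, &get_offset))
                return -1;
        if (get_offset.count != count)
                return -1;      /* must match the original map exactly */

        /* Drop the VMA, then tell gntdev to release the grant mappings. */
        if (munmap(start_address, (size_t)count * 4096))
                return -1;

        memset(&unmap, 0, sizeof(unmap));
        unmap.index = get_offset.offset;
        unmap.count = count;
        return ioctl(fd, IOCTL_GNTDEV_UNMAP_GRANT_REF, &unmap);
}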

> Another use case of gntalloc that may prove useful is to have more than
> one application able to map the same grant within the kernel.

So you mean have gntalloc allocate one page and then allow multiple
processes to map and use it?  In that case it would probably be best
implemented as a filesystem, so you can give proper globally visible
names to the granted regions, and mmap them as normal files, like shm.
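
Usage would then look much like POSIX shm does today; purely illustrative,
since the filesystem and the path below don't exist:

#include <fcntl.h>
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Any process that knows the name can map the same granted region. */
void *map_granted_region(const char *path, size_t len)
{
        int fd = open(path, O_RDWR);    /* e.g. "/dev/xen/gntalloc/fb0" (hypothetical) */
        void *p;

        if (fd < 0)
                return NULL;
        p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd);
        return p == MAP_FAILED ? NULL : p;
}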

> Agreed; once mapped, the frame numbers (GFN & MFN) won't change until
> they are unmapped, so pre-populating them will be better.

Unless, of course, you don't want to map the pages in dom0 at all, and
just want dom0 to act as a facilitator for shared pages between two other
domains.  Does Xen allow a page to be granted to more than one domain at
once?

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
