
[Xen-devel] [PATCH 0/2] Userspace grant communication



For fast communication between userspace applications in different domains,
it is useful to be able to set up a shared memory page. This can be used to
implement device driver frontends and backends entirely in userspace, or
as a faster alternative to network communication. The current gntdev is
limited to PV domains and does not allow grants to be created. The following
patches change gntdev to remap existing pages, allowing the same code to be
used in PV and HVM domains, and add a gntalloc driver that allows mappings
to be created by userspace. These changes also make the mappings more
application-friendly: mmap() can be called multiple times, the mappings
persist across fork(), and the device can be closed without invalidating
the mapped areas. This matches the behavior of mmap() on a normal file.
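
As a rough illustration, here is a minimal sketch of the granting side.
The struct and ioctl names follow the proposed xen/gntalloc.h interface;
treat the details as assumptions, since the posted API may still change:

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <xen/gntalloc.h>   /* IOCTL_GNTALLOC_ALLOC_GREF et al. */

int main(void)
{
	int fd = open("/dev/xen/gntalloc", O_RDWR);
	if (fd < 0) { perror("open"); return 1; }

	struct ioctl_gntalloc_alloc_gref arg = {
		.domid = 0,                      /* domain allowed to map the page */
		.flags = GNTALLOC_FLAG_WRITABLE, /* peer may map read-write */
		.count = 1,                      /* one page */
	};
	if (ioctl(fd, IOCTL_GNTALLOC_ALLOC_GREF, &arg)) {
		perror("IOCTL_GNTALLOC_ALLOC_GREF");
		return 1;
	}

	/* Map the freshly allocated page at the driver-chosen offset. */
	char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
			  MAP_SHARED, fd, arg.index);
	if (page == MAP_FAILED) { perror("mmap"); return 1; }

	strcpy(page, "hello from the granting domain");

	/* Publish arg.gref_ids[0] (e.g. via xenstore) so the peer can
	 * map it through /dev/xen/gntdev.  Closing the fd does not tear
	 * down the mapping, matching mmap() on a regular file. */
	printf("grant ref: %u\n", arg.gref_ids[0]);
	close(fd);
	pause();  /* keep the page alive while the peer uses it */
	return 0;
}

The mapping side would read the grant reference (e.g. from xenstore), open
/dev/xen/gntdev, issue IOCTL_GNTDEV_MAP_GRANT_REF, and mmap() at the
returned index.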

API changes from the existing /dev/xen/gntdev:

The unused "pad" field in ioctl_gntdev_map_grant_ref is now used for flags
on the mapping (currently used to specify if the mapping should be writable).
This provides sufficient information to perform the mapping when the ioctl is
called. To retain compatibility with current userspace, a new ioctl number is
used for this functionality and the legacy error on first mapping is retained
when the old ioctl is used.
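
To illustrate the flow, here is a hypothetical sketch of the mapping side
using the flags field; the struct layout, flag name, and ioctl number below
are reconstructed from this description rather than copied from the patch:

#include <stdint.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/ioctl.h>

struct ioctl_gntdev_map_grant_ref_v2 {  /* hypothetical layout */
	uint32_t count;              /* IN: number of grants to map */
	uint32_t flags;              /* IN: was "pad"; mapping flags */
	uint64_t index;              /* OUT: offset to pass to mmap() */
	struct {
		uint32_t domid;      /* domain that granted the page */
		uint32_t ref;        /* grant reference to map */
	} refs[1];
};
#define GNTDEV_MAP_WRITABLE 0x1      /* hypothetical flag name */
#define IOCTL_GNTDEV_MAP_GRANT_REF_V2 \
	_IOC(_IOC_NONE, 'G', 9, sizeof(struct ioctl_gntdev_map_grant_ref_v2))

/* Map one page granted by 'domid' under grant reference 'gref'. */
static int map_peer_page(int fd, uint32_t domid, uint32_t gref, void **out)
{
	struct ioctl_gntdev_map_grant_ref_v2 map = {
		.count = 1,
		.flags = GNTDEV_MAP_WRITABLE, /* writability known at ioctl time */
	};
	map.refs[0].domid = domid;
	map.refs[0].ref   = gref;

	if (ioctl(fd, IOCTL_GNTDEV_MAP_GRANT_REF_V2, &map))
		return -1;

	*out = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED,
		    fd, map.index);
	return *out == MAP_FAILED ? -1 : 0;
}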

IOCTL_GNTDEV_SET_MAX_GRANTS is not exposed in the Xen userspace libraries,
and is not very useful: it cannot be used to raise the limit of grants per
file descriptor, and it is trivial to bypass by opening the device multiple
times. This version replaces it with a global limit specified as a module
parameter (modifiable at runtime via sysfs).
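
Since the limit is an ordinary module parameter, it can be adjusted at
runtime through sysfs. A minimal sketch, assuming the parameter is named
"limit" (an assumption, not taken from the patch):

#include <stdio.h>

int main(void)
{
	/* "limit" is an assumed parameter name; check
	 * /sys/module/gntalloc/parameters/ on a running system. */
	FILE *f = fopen("/sys/module/gntalloc/parameters/limit", "w");
	if (!f) { perror("fopen"); return 1; }
	fprintf(f, "2048\n");  /* allow up to 2048 outstanding grants */
	return fclose(f) != 0;
}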

-- 
Daniel De Graaf
National Security Agency

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel