Re: [Xen-devel] [PATCH 10 of 24] libxc: infrastructure for hypercall saf
On Tue, 2010-09-07 at 09:44 +0100, Jeremy Fitzhardinge wrote:
> How does this end up making the memory suitable for passing to Xen?
> Where does it get locked down in the non-__sun__ case? And why just
> __sun__ here?
As described in patch 0/24, the series still uses the same mlock
mechanism as before to actually obtain "suitable" memory. The __sun__
stuff is the same as before too -- this part was ported directly from
the existing bounce implementation in xc_private.c.
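For reference, the existing logic in xc_private.c (the lock_pages()
pattern) boils down to roughly the following -- a simplified sketch,
assuming 4k pages, not the verbatim libxc code:

    #include <stddef.h>
    #include <sys/mman.h>

    #define PAGE_SIZE 4096UL              /* assumption: 4k pages */
    #define PAGE_MASK (~(PAGE_SIZE - 1))

    /*
     * Pin a buffer in place so it is safe to hand to the hypervisor
     * for the duration of a hypercall.  mlock(2) operates on whole
     * pages, so align the start down and round the length up.  As in
     * the existing code, Solaris is special-cased and the mlock is
     * compiled out there.
     */
    static int lock_hypercall_buffer(void *addr, size_t len)
    {
        int e = 0;
    #ifndef __sun__
        void *laddr = (void *)((unsigned long)addr & PAGE_MASK);
        size_t llen = (len + ((unsigned long)addr - (unsigned long)laddr)
                       + PAGE_SIZE - 1) & PAGE_MASK;
        e = mlock(laddr, llen);
    #endif
        return e;
    }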
This series only:
* ensures that everywhere which should be using special hypercall
memory is actually using the correct (or any!) interface to
obtain it. Not everywhere was -- sometimes by omission, but more
often because the current implementation will only bounce one
buffer at a time and simply locks any subsequent nested bounce
attempts in place. The current implementation also only bounces
buffers smaller than 1 page and just locks anything else in
place.
* ensures that each buffer is only locked once -- some callchains
were (un)locking the same buffer multiple times going down/up
the stack (particularly concerning for buffers which are reused)
* removes the use of mlock on portions of the stack, which is
considered more dubious than using mlock in general.
* makes it easier to switch to a better mechanism than mlock in
the future (i.e. phase 2) by consolidating the magic allocations
into one place (see the sketch just below this list).
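To make that last point concrete, here is a rough sketch of what the
single consolidated allocator could look like. The names
hypercall_buffer_alloc/_free are illustrative, not necessarily the
series' actual interface:

    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mman.h>

    /*
     * Single choke point for obtaining hypercall-safe memory.  Today
     * it hands back page-aligned memory pinned with mlock(2);
     * switching to a better mechanism later (phase 2) means changing
     * only these two functions, not every call site in libxc.
     */
    void *hypercall_buffer_alloc(size_t len)
    {
        size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
        size_t alloc = (len + pgsz - 1) & ~(pgsz - 1);
        void *p;

        if ( posix_memalign(&p, pgsz, alloc) )
            return NULL;
    #ifndef __sun__
        if ( mlock(p, alloc) )
        {
            free(p);
            return NULL;
        }
    #endif
        return p;
    }

    void hypercall_buffer_free(void *p, size_t len)
    {
        size_t pgsz = (size_t)sysconf(_SC_PAGESIZE);
        size_t alloc = (len + pgsz - 1) & ~(pgsz - 1);

        if ( !p )
            return;
    #ifndef __sun__
        munlock(p, alloc);
    #endif
        free(p);
    }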
> Is there any way to make memory hypercall-safe with existing syscalls,
> or does/will it end up copying from this memory into the kernel before
> issuing the hypercall? Or adding some other mechanism for pinning
> down the pages?
It's not clear what phase 2 actually is (although phase 3 is clearly
profit), but I don't think any existing syscalls do what we need.
mlock (avoiding the stack) gets pretty close, and so far the issues
with mlock seem to have been more potential than actual -- they
haven't hurt us in practice -- but it pays to be prepared, e.g. for
more aggressive page migration/coalescing in the future, I think.
It's not possible to copy the necessary buffers in the kernel without
adding deep introspection of each hypercall's arguments to the kernel
itself, and I think we want to avoid that if possible.
Also, some of the buffers can be quite large and/or potentially
performance sensitive, so we would like to retain the ability to
allocate the correct sort of memory in userspace from the get-go and
therefore avoid bouncing at all.
I was thinking we might need to implement some sort of special
anonymous mmap on the privcmd device, or an ioctl, or something along
those lines, but I'm open to better suggestions.
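Purely as a straw man -- the device path, mapping semantics and
kernel behaviour below are all invented for illustration, none of it
exists today -- the userspace half of the mmap idea might look
something like:

    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /*
     * Hypothetical: map anonymous memory through the privcmd device
     * so the driver can hand back pages which are already guaranteed
     * safe to pass to Xen -- no mlock, no bouncing in userspace.
     */
    void *alloc_hypercall_pages(size_t len)
    {
        /* Hypothetical path; privcmd offers no such mapping today. */
        int fd = open("/proc/xen/privcmd", O_RDWR);
        void *p;

        if ( fd < 0 )
            return NULL;

        /* The driver would allocate/pin suitable pages behind this. */
        p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        close(fd); /* the mapping would need to outlive the fd */

        return ( p == MAP_FAILED ) ? NULL : p;
    }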
Ian.