[Xen-devel] [PATCH 00 of 25] libxc: Hypercall buffers
libxc currently locks various data structures present on the stack
using mlock(2) in order to try and make them safe for passing to
hypercalls (which require the memory to be mapped).
There are several issues with this approach:
1) mlock/munlock do not nest, so mlocking multiple pieces of data on
the stack which happen to share a page causes everything to be
unlocked on the first munlock, not the last (see the sketch after this
list). This is likely to be OK for the current uses in libxc taken in
isolation, but could impact any caller of libxc which uses mlock
itself.
2) mlocking only part of the stack is considered by many to be a
dubious use of mlock, even if it is, strictly speaking, allowed by the
relevant specifications.
3) mlock may not provide the semantics required for hypercall-safe
memory. mlock simply ensures that there can be no major faults (page
faults requiring I/O to satisfy) but does not necessarily rule out
minor faults (e.g. due to page migration).
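To make issue (1) concrete, here is a minimal sketch of the problem;
hypercall_a and hypercall_b are hypothetical stand-ins for libxc
functions which pass their buffers to the hypervisor:

#include <stddef.h>
#include <sys/mman.h>

extern void hypercall_a(void *arg, size_t len); /* hypothetical */
extern void hypercall_b(void *arg, size_t len); /* hypothetical */

void example(void)
{
    char buf_a[64];
    char buf_b[64];   /* very likely shares a stack page with buf_a */

    mlock(buf_a, sizeof(buf_a));
    mlock(buf_b, sizeof(buf_b));

    hypercall_a(buf_a, sizeof(buf_a));
    munlock(buf_a, sizeof(buf_a));      /* unlocks the shared page... */

    hypercall_b(buf_b, sizeof(buf_b));  /* ...buf_b is no longer locked */
    munlock(buf_b, sizeof(buf_b));
}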
The following series introduces an explicit hypercall-safe memory pool
API, which includes support for bouncing user-supplied memory buffers
into suitable memory.
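As a rough illustration of the intended usage, here is a sketch of the
bounce pattern after conversion. The macro and function names follow
the interface added by this series, but the exact signatures are in
the individual patches, and XEN_SYSCTL_example / sysctl.u.example are
purely hypothetical:

int xc_example_query(xc_interface *xch, uint8_t *buf, size_t len)
{
    int rc;
    DECLARE_SYSCTL;
    /* Describe the caller's buffer and the direction of the bounce
     * (OUT: the hypervisor writes, we copy back to the caller). */
    DECLARE_HYPERCALL_BOUNCE(buf, len, XC_HYPERCALL_BUFFER_BOUNCE_OUT);

    /* Obtain hypercall-safe memory from the pool (and, for IN/BOTH
     * directions, copy the caller's data into it). */
    if ( xc_hypercall_bounce_pre(xch, buf) )
        return -1;

    sysctl.cmd = XEN_SYSCTL_example;          /* hypothetical sub-op */
    sysctl.u.example.size = len;              /* hypothetical field  */
    set_xen_guest_handle(sysctl.u.example.buffer, buf);

    rc = do_sysctl(xch, &sysctl);

    /* Copy the result back to the caller and release the bounce
     * buffer, whether or not the hypercall succeeded. */
    xc_hypercall_bounce_post(xch, buf);

    return rc;
}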
This series addresses (1) and (2) but does not directly address (3),
other than by encapsulating the code which acquires hypercall-safe
memory in one place where it can be more easily fixed.
There is also the slightly separate issue of code which forgets to
lock buffers as necessary; therefore this series overrides the Xen
guest-handle interfaces to attempt to improve compile-time checking
for correct use of the memory pool. This scheme works for the pointers
contained within hypercall argument structures but doesn't catch the
actual hypercall arguments themselves. I'm open to suggestions on how
to extend it cleanly to catch those cases.
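The basic idea behind the compile-time checking, in simplified form
(the real macros in the series differ in detail): a hypercall buffer
declaration creates a shadow variable with a derived, well-known name,
and set_xen_guest_handle is overridden to reference that shadow, so
passing anything which was not declared as a hypercall buffer fails to
compile:

/* Simplified sketch only; not the exact macros used by the series. */
typedef struct xc_hypercall_buffer {
    void *hbuf;   /* pointer to the hypercall-safe copy of the data */
} xc_hypercall_buffer_t;

/* Map a C variable name onto the name of its shadow structure. */
#define XC__HYPERCALL_BUFFER_NAME(_name) xc__hypercall_buffer_##_name

/* Declare both the user-visible pointer and its shadow structure. */
#define DECLARE_HYPERCALL_BUFFER(_type, _name)                          \
    _type *_name = NULL;                                                \
    xc_hypercall_buffer_t XC__HYPERCALL_BUFFER_NAME(_name) = { .hbuf = NULL }

/* Override the guest handle accessor: only names declared with
 * DECLARE_HYPERCALL_BUFFER (or its bounce equivalent) have a matching
 * xc__hypercall_buffer_<name> shadow, so passing a plain pointer is a
 * compile error rather than a silent missing-lock bug. */
#undef set_xen_guest_handle
#define set_xen_guest_handle(_hnd, _name) \
    do { (_hnd).p = XC__HYPERCALL_BUFFER_NAME(_name).hbuf; } while (0)

With something along these lines in place, a raw pointer passed to
set_xen_guest_handle fails at build time with an undeclared-identifier
error rather than compiling into an unlocked hypercall argument.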
The bits which touch ia64 are not even compile tested since I do not
have access to a suitable userspace-capable cross compiler.
Changes since last time:
- rebased on top of the recent cpupool changes, resolving conflicts in
xc_cpupool_getinfo and xc_cpupool_freeinfo.