[Xen-devel] [PATCH 00 of 25] libxc: Hypercall buffers

To: xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] [PATCH 00 of 25] libxc: Hypercall buffers
From: Ian Campbell <ian.campbell@xxxxxxxxxx>
Date: Fri, 22 Oct 2010 15:15:42 +0100
Cc: Ian Campbell <ian.campbell@xxxxxxxxxx>
libxc currently locks various data structures on the stack using
mlock(2) in order to try to make them safe for passing to hypercalls
(which require the memory to be mapped).

There are several issues with this approach:

1) mlock/munlock do not nest, so mlocking multiple pieces of data on
   the stack which happen to share a page causes everything to be
   unlocked by the first munlock rather than the last (see the sketch
   after this list). This is probably harmless for the current uses
   within libxc taken in isolation, but it could impact any caller of
   libxc which uses mlock itself.
2) mlocking only parts of the stack is considered by many to be a
   dubious use of mlock, even if it is strictly speaking allowed by
   the relevant specifications.
3) mlock may not actually provide the semantics needed for
   hypercall-safe memory. mlock simply ensures that there can be no
   major faults (page faults requiring I/O to satisfy) but does not
   necessarily rule out minor faults (e.g. due to page migration).
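
The non-nesting behaviour behind (1) is easy to demonstrate with a
minimal sketch (purely illustrative, not part of this series): two
small stack buffers will usually share a page, so unlocking one
unlocks the other as well.

    #include <sys/mman.h>

    void mlock_nesting_example(void)
    {
        char buf_a[64];
        char buf_b[64];   /* almost certainly shares a page with buf_a */

        mlock(buf_a, sizeof buf_a);
        mlock(buf_b, sizeof buf_b);   /* locks do not nest */

        munlock(buf_a, sizeof buf_a); /* unlocks the whole shared page... */
        /* ...so buf_b is no longer locked here, even though it has
         * not itself been munlock()ed. */
        munlock(buf_b, sizeof buf_b);
    }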

This series introduces an explicit hypercall-safe memory pool API,
including support for bouncing user-supplied memory buffers into
suitable memory.
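
To give a flavour of the two intended usage patterns (allocating
hypercall-safe memory directly from the pool, and bouncing a
caller-supplied buffer through it), here is a hedged sketch. The
identifiers below (DECLARE_HYPERCALL_BUFFER, xc_hypercall_buffer_alloc,
DECLARE_HYPERCALL_BOUNCE, xc_hypercall_bounce_pre/post,
HYPERCALL_BUFFER and do_example_hypercall) are assumptions for
illustration and may not match the series exactly:

    /* Allocation: replaces mlock()ing an on-stack structure. */
    int get_info_example(xc_interface *xch, uint32_t domid)
    {
        DECLARE_HYPERCALL_BUFFER(xc_domaininfo_t, info);
        int rc;

        info = xc_hypercall_buffer_alloc(xch, info, sizeof(*info));
        if ( info == NULL )
            return -1;

        rc = do_example_hypercall(xch, domid, HYPERCALL_BUFFER(info));

        xc_hypercall_buffer_free(xch, info);
        return rc;
    }

    /* Bouncing: copy a user buffer in/out of hypercall-safe memory. */
    int bounce_example(xc_interface *xch, void *user_buf, size_t len)
    {
        DECLARE_HYPERCALL_BOUNCE(user_buf, len,
                                 XC_HYPERCALL_BUFFER_BOUNCE_BOTH);
        int rc;

        if ( xc_hypercall_bounce_pre(xch, user_buf) )
            return -1;

        rc = do_example_hypercall(xch, 0, HYPERCALL_BUFFER(user_buf));

        xc_hypercall_bounce_post(xch, user_buf);
        return rc;
    }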

This series addresses (1) and (2) but does not directly address (3),
other than by encapsulating the code which acquires hypercall-safe
memory in one place, where it can be more easily fixed.

There is also the slightly separate issue of code which forgets to
lock buffers as necessary, and therefore this series overrides the
Xen guest-handle interfaces to improve compile-time checking for
correct use of the memory pool. This scheme works for the pointers
contained within hypercall argument structures but does not catch
the actual hypercall arguments themselves; I'm open to suggestions
on how to extend it cleanly to catch those cases.
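
The idea behind the override can be sketched roughly as follows; the
structure layout and macro names here are illustrative assumptions,
not the series' actual definitions:

    /* A distinct type for pool-backed buffers (illustrative). */
    typedef struct xc_hypercall_buffer {
        void *hbuf;   /* hypercall-safe mapping */
        void *ubuf;   /* original user buffer, if bounced */
    } xc_hypercall_buffer_t;

    /* Override the guest-handle setter so that it only accepts a
     * xc_hypercall_buffer_t.  Passing a raw pointer no longer
     * compiles, because (buf)->hbuf is not a valid expression for a
     * plain void * or char *. */
    #undef set_xen_guest_handle
    #define set_xen_guest_handle(handle, buf) \
        set_xen_guest_handle_raw(handle, (buf)->hbuf)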

The bits which touch ia64 are not even compile-tested, since I do not
have access to a suitable userspace-capable cross-compiler.

Changes since last time:
  - rebased on top of recent cpupool changes, conflicts in
    xc_cpupool_getinfo and xc_cpupool_freeinfo.

