[Xen-devel] [patch 12/21] Xen-paravirt: Allocate and free vmalloc areas
To: Andi Kleen <ak@xxxxxx>
Subject: [Xen-devel] [patch 12/21] Xen-paravirt: Allocate and free vmalloc areas
From: Jeremy Fitzhardinge <jeremy@xxxxxxxx>
Date: Thu, 15 Feb 2007 18:25:01 -0800
Cc: Zachary Amsden <zach@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Ian Pratt <ian.pratt@xxxxxxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, Jan Beulich <JBeulich@xxxxxxxxxx>, Chris Wright <chrisw@xxxxxxxxxxxx>, virtualization@xxxxxxxxxxxxxx, Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>, Christian Limpach <Christian.Limpach@xxxxxxxxxxxx>
References: <20070216022449.739760547@xxxxxxxx>
User-agent: quilt/0.46-1
Allocate/destroy a 'vmalloc' VM area: alloc_vm_area and free_vm_area.
The alloc function ensures that page tables are constructed for the
region of kernel virtual address space and mapped into init_mm.
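
For illustration only (not part of the patch), a caller such as a Xen
frontend driver might use the pair roughly as follows; the function
names, the 'shared_area' variable and the single-page size are
hypothetical:

#include <linux/mm.h>		/* PAGE_SIZE */
#include <linux/vmalloc.h>
#include <linux/errno.h>

/* Hypothetical caller state, purely for the example. */
static struct vm_struct *shared_area;

static int example_init(void)
{
	/* Reserve kernel virtual address space; alloc_vm_area also
	 * ensures the page tables backing it exist in init_mm. */
	shared_area = alloc_vm_area(PAGE_SIZE);
	if (shared_area == NULL)
		return -ENOMEM;

	/* shared_area->addr can now be handed to code (for example a
	 * hypervisor mapping operation) that installs real pages there. */
	return 0;
}

static void example_exit(void)
{
	free_vm_area(shared_area);
}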
Lock an area so that PTEs are accessible in the current address space:
lock_vm_area and unlock_vm_area. The lock function prevents context
switches to a lazy mm that doesn't have the area mapped into its page
tables. It also ensures that the page tables are mapped into the
current mm by causing the page fault handler to copy the page
directory pointers from init_mm into the current mm.
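
Again purely as a sketch (the example_touch() helper below is made up),
any access made through such an area from an arbitrary task context
would be bracketed by the lock/unlock pair:

#include <linux/vmalloc.h>

static void example_touch(struct vm_struct *area)
{
	lock_vm_area(area);
	/*
	 * Preemption is now disabled and the area's page directory
	 * entries have been faulted into the current mm, so area->addr
	 * is safe to use here -- e.g. as a hypercall argument.
	 */
	/* ... access memory through area->addr ... */
	unlock_vm_area(area);
}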
These functions are not particularly Xen-specific, so they're put into
mm/vmalloc.c.
Signed-off-by: Ian Pratt <ian.pratt@xxxxxxxxxxxxx>
Signed-off-by: Christian Limpach <Christian.Limpach@xxxxxxxxxxxx>
Signed-off-by: Chris Wright <chrisw@xxxxxxxxxxxx>
Signed-off-by: Jeremy Fitzhardinge <jeremy@xxxxxxxxxxxxx>
Cc: "Jan Beulich" <JBeulich@xxxxxxxxxx>
--
include/linux/vmalloc.h | 8 +++++
mm/vmalloc.c | 62 ++++++++++++++++++++++++++++++++++++++++++++++
2 files changed, 70 insertions(+)
===================================================================
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -68,6 +68,14 @@ extern int map_vm_area(struct vm_struct
 			struct page ***pages);
 extern void unmap_vm_area(struct vm_struct *area);
 
+/* Allocate/destroy a 'vmalloc' VM area. */
+extern struct vm_struct *alloc_vm_area(unsigned long size);
+extern void free_vm_area(struct vm_struct *area);
+
+/* Lock an area so that PTEs are accessible in the current address space. */
+extern void lock_vm_area(struct vm_struct *area);
+extern void unlock_vm_area(struct vm_struct *area);
+
 /*
  *	Internals.  Dont't use..
  */
===================================================================
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -747,3 +747,65 @@ out_einval_locked:
 }
 EXPORT_SYMBOL(remap_vmalloc_range);
 
+static int f(pte_t *pte, struct page *pmd_page, unsigned long addr, void *data)
+{
+	/* apply_to_page_range() does all the hard work. */
+	return 0;
+}
+
+struct vm_struct *alloc_vm_area(unsigned long size)
+{
+	struct vm_struct *area;
+
+	area = get_vm_area(size, VM_IOREMAP);
+	if (area == NULL)
+		return NULL;
+
+	/*
+	 * This ensures that page tables are constructed for this region
+	 * of kernel virtual address space and mapped into init_mm.
+	 */
+	if (apply_to_page_range(&init_mm, (unsigned long)area->addr,
+				area->size, f, NULL)) {
+		free_vm_area(area);
+		return NULL;
+	}
+
+	return area;
+}
+EXPORT_SYMBOL_GPL(alloc_vm_area);
+
+void free_vm_area(struct vm_struct *area)
+{
+	struct vm_struct *ret;
+	ret = remove_vm_area(area->addr);
+	BUG_ON(ret != area);
+	kfree(area);
+}
+EXPORT_SYMBOL_GPL(free_vm_area);
+
+void lock_vm_area(struct vm_struct *area)
+{
+	unsigned long i;
+	char c;
+
+	/*
+	 * Prevent context switch to a lazy mm that doesn't have this area
+	 * mapped into its page tables.
+	 */
+	preempt_disable();
+
+	/*
+	 * Ensure that the page tables are mapped into the current mm. The
+	 * page-fault path will copy the page directory pointers from init_mm.
+	 */
+	for (i = 0; i < area->size; i += PAGE_SIZE)
+		(void)__get_user(c, (char __user *)area->addr + i);
+}
+EXPORT_SYMBOL_GPL(lock_vm_area);
+
+void unlock_vm_area(struct vm_struct *area)
+{
+	preempt_enable();
+}
+EXPORT_SYMBOL_GPL(unlock_vm_area);