Jeremy Fitzhardinge wrote:
> Keir Fraser wrote:
>> I'm not really familiar with the pv_ops code I'm afraid. But thinking
>> about this some more I've realised there's no way really to avoid making
>> the early-unpin logic aware of gntdev mappings. This is because if we do
>> pin pte pages, and require them to remain pinned across early-unpin,
>> then pgd_unpin() must not attempt to make those pte pages writable. That
>> will fail, because the pages are still pinned! You'd either need to
>> handle the failure to make the page writable, or have a per-page flag to
>> indicate which pte pages contain gntdev mappings. Frankly you may as
>> well stick with the per-mm-context has_foreign_mappings flag.
> So the issue is that a pte page containing a _PAGE_IO pte must remain
> pinned while it contains that mapping? Would shooting down the mapping
> allow it to be unpinned, or does that need to be deferred until some
> later point (if so, when?)?
>
> I guess the downside is that we'd need to scan the pte page looking for
> _PAGE_IO mappings, which is a bit of a pain. Skipping that would mean
> hiding a flag somewhere...
>> Is it a pain to add a pv_ops-subtype-specific flag to mm_context? If so
>> you could maintain a set data structure instead, indicating which
>> mm_contexts contain foreign mappings.
> So, in 2.6.18-xen mm->context.has_foreign_mappings makes it skip
> early-unpin, but puts it off until pgd_free(). Presumably that works
> because all the vmas have been unmapped by then...
The following patch was sufficient for me. I delayed arch_exit_mmap (which
eventually calls into Xen) until after unmap_vmas is called, which calls
zap_pte (where I unmap the grant). Presumably there is a performance
overhead to always doing this delay, which is why 2.6.18 only delayed when
has_foreign_mappings was set; for macrobenchmarks like compilation, I
couldn't find a difference.
Cheers,
Mike
diff --git a/mm/mmap.c b/mm/mmap.c
index a32d28c..c118b54 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -2036,15 +2036,14 @@ void exit_mmap(struct mm_struct *mm)
 	unsigned long nr_accounted = 0;
 	unsigned long end;
 
-	/* mm's last user has gone, and its about to be pulled down */
-	arch_exit_mmap(mm);
-
 	lru_add_drain();
 	flush_cache_mm(mm);
 	tlb = tlb_gather_mmu(mm, 1);
 	/* Don't update_hiwater_rss(mm) here, do_exit already did */
 	/* Use -1 here to ensure all VMAs in the mm are unmapped */
 	end = unmap_vmas(&tlb, vma, 0, -1, &nr_accounted, NULL);
+	/* mm's last user has gone, and its about to be pulled down */
+	arch_exit_mmap(mm);
 	vm_unacct_memory(nr_accounted);
 	free_pgtables(&tlb, vma, FIRST_USER_ADDRESS, 0);
 	tlb_finish_mmu(tlb, 0, end);
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel