[Xen-changelog] [xen-unstable] x86/mm/shadow: adjust early-unshadow heuristic for PAE guests.
# HG changeset patch
# User Tim Deegan <Tim.Deegan@xxxxxxxxxx>
# Date 1308572174 -3600
# Node ID c91255b2f0a047a2fdf69633a19cdc0b10ea60a5
# Parent eca057e4475ca455ec36f962b9179fd2c9674196
x86/mm/shadow: adjust early-unshadow heuristic for PAE guests.
PAE guests have 8-byte PTEs but tend to clear memory with 4-byte writes.
This means that when zeroing a former pagetable, every second 4-byte
write is unaligned, so the consecutive-zeroes --> unshadow heuristic
never kicks in.  Adjust the heuristic not to reset when a write of
>= 4 bytes stores zero but is not PTE-aligned.
Signed-off-by: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
---
diff -r eca057e4475c -r c91255b2f0a0 xen/arch/x86/mm/shadow/multi.c
--- a/xen/arch/x86/mm/shadow/multi.c Fri Jun 17 08:08:13 2011 +0100
+++ b/xen/arch/x86/mm/shadow/multi.c Mon Jun 20 13:16:14 2011 +0100
@@ -4918,11 +4918,14 @@
     ASSERT(mfn_valid(sh_ctxt->mfn1));
 
     /* If we are writing lots of PTE-aligned zeros, might want to unshadow */
-    if ( likely(bytes >= 4)
-         && (*(u32 *)addr == 0)
-         && ((unsigned long) addr & ((sizeof (guest_intpte_t)) - 1)) == 0 )
-        check_for_early_unshadow(v, sh_ctxt->mfn1);
-    else
+    if ( likely(bytes >= 4) && (*(u32 *)addr == 0) )
+    {
+        if ( ((unsigned long) addr & ((sizeof (guest_intpte_t)) - 1)) == 0 )
+            check_for_early_unshadow(v, sh_ctxt->mfn1);
+        /* Don't reset the heuristic if we're writing zeros at non-aligned
+         * addresses, otherwise it doesn't catch REP MOVSD on PAE guests */
+    }
+    else
         reset_early_unshadow(v);
 
     /* We can avoid re-verifying the page contents after the write if:
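For illustration only (this is not part of the patch, nor Xen code): the small
standalone C program below sketches why the old test defeats the heuristic on
PAE guests.  With 8-byte PTEs and a stream of 4-byte zeroing stores, every
second store fails the 8-byte alignment check, so the old logic resets the
consecutive-zeroes count before it can reach any unshadow threshold, while the
patched logic simply ignores the unaligned zero stores.  The counter, threshold
and function names here are invented stand-ins for the real state behind
check_for_early_unshadow()/reset_early_unshadow().

/* Illustration only -- not Xen code.  The 'hits' counter and
 * UNSHADOW_THRESHOLD are invented stand-ins for the real early-unshadow
 * state behind check_for_early_unshadow()/reset_early_unshadow(). */
#include <stdio.h>
#include <stdint.h>

typedef uint64_t guest_intpte_t;        /* a PAE PTE is 8 bytes wide */
#define UNSHADOW_THRESHOLD 8            /* arbitrary value for the demo */

static unsigned int hits;               /* consecutive aligned zero writes */

/* Old behaviour: any unaligned write, even one that stores zero, resets. */
static void old_heuristic(unsigned long addr, unsigned int bytes, int zero)
{
    if ( bytes >= 4 && zero
         && (addr & (sizeof(guest_intpte_t) - 1)) == 0 )
        hits++;                         /* check_for_early_unshadow() */
    else
        hits = 0;                       /* reset_early_unshadow() */
}

/* Patched behaviour: unaligned zero writes neither count nor reset. */
static void new_heuristic(unsigned long addr, unsigned int bytes, int zero)
{
    if ( bytes >= 4 && zero )
    {
        if ( (addr & (sizeof(guest_intpte_t) - 1)) == 0 )
            hits++;
    }
    else
        hits = 0;
}

int main(void)
{
    void (*heuristic[2])(unsigned long, unsigned int, int) =
        { old_heuristic, new_heuristic };
    const char *name[2] = { "old", "new" };

    for ( int h = 0; h < 2; h++ )
    {
        unsigned int peak = 0;

        hits = 0;
        /* Guest zeroes a 4KB former pagetable with 4-byte stores
         * (think REP STOSD/MOVSD): addresses step by 4, PTEs are 8 bytes,
         * so every second store is not PTE-aligned. */
        for ( unsigned long addr = 0x1000; addr < 0x2000; addr += 4 )
        {
            heuristic[h](addr, 4, /* writes zero */ 1);
            if ( hits > peak )
                peak = hits;
        }
        printf("%s heuristic: at most %u consecutive aligned zero writes"
               " -> unshadow %s\n", name[h], peak,
               peak >= UNSHADOW_THRESHOLD ? "fires" : "never fires");
    }
    return 0;
}

Running the sketch prints a peak of 1 for the old check (it never fires) and
512 for the new one, which is the behaviour the patch is after.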