Re: [Xen-devel] [PATCH] turn off writable page tables 

Keir Fraser wrote:
> On 26 Jul 2006, at 09:18, Gerd Hoffmann wrote:
>
>>> I'd like to make sure there's no 'dumb stuff' happening, that the
>>> writeable-pagetables path isn't being used erroneously where we
>>> don't expect it (hence crippling the scores), and that it's actually
>>> functioning as intended, i.e. that we get one fault to unhook, and
>>> then a fault causing a rehook once we move to the next page in the
>>> fork.
>>>
>>> If you write a little test program that dirties a large chunk of
>>> memory just before the fork, we should see writeable pagetables
>>> winning easily.
 
>> Just an idea: any chance mm_pin() and mm_unpin() cause this?  The
>> bulk pagetable updates for the new process created by fork() are not
>> seen by Xen anyway, I think.  The first schedule of the new process
>> triggers pinning, i.e. r/o mapping and verification ...
 
> The batching should still benefit the write-protecting of the parent
> pagetables, which are visible to Xen during fork() (since the fork()
> runs on them!).
>
> Hence the suggestion of dirtying pages before the fork -- that will
> ensure that lots of PTEs are definitely writable, and so they will
> have to be updated to make them read-only.
 
And it does make a difference in this case.  I now have a test program
which dirties a number of virtually contiguous pages and then forks (it
also resets the Xen perf counters just before the fork and collects
them right after), then records the elapsed time for the fork.  The
difference is quite amazing.  For both writable pagetables and
emulation, I ran with a range of dirty-page counts, from 1280 to
128000, and the elapsed fork times are quite linear from the small
counts to the large ones.  Below are the min and max:

              1280 pages    128000 pages
    wtpt:      813 usec      37552 usec
    emulate:  3279 usec     283879 usec
The perf counters showed that just about every writable pagetable page
had all of its entries modified (shown here for the 128000-page run;
250 matches 128000 pages / 512 PTEs per pagetable):

    writable pt updates: total: 253  all entries updated: 250
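
(For reference, a test program along these lines might look something
like the following -- a minimal, hypothetical sketch, since the actual
program wasn't posted, and with the Xen perf-counter reset/collect
steps omitted:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/mman.h>
    #include <sys/time.h>
    #include <sys/wait.h>

    int main(int argc, char **argv)
    {
        long npages = (argc > 1) ? atol(argv[1]) : 1280;
        long pagesz = sysconf(_SC_PAGESIZE);
        struct timeval t0, t1;
        long i, usec;
        pid_t pid;
        char *buf;

        buf = mmap(NULL, npages * pagesz, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* Dirty every page so its PTE is present and writable; the
         * fork() below must then write-protect each PTE for COW. */
        for (i = 0; i < npages; i++)
            buf[i * pagesz] = 1;

        gettimeofday(&t0, NULL);
        pid = fork();
        gettimeofday(&t1, NULL);

        if (pid == 0)
            _exit(0);               /* child exits immediately */
        waitpid(pid, NULL, 0);

        usec = (t1.tv_sec - t0.tv_sec) * 1000000L +
               (t1.tv_usec - t0.tv_usec);
        printf("%ld dirty pages: fork took %ld usec\n", npages, usec);
        return 0;
    }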
So, in a perfect world this works great.  The problem is that most
workloads don't appear to have such a high percentage of entries that
need to be updated.  I'll go ahead and expand this test to find out
where the break-even threshold is.  I'll also see if we can implement a
batched call in fork to update the parent -- I hope this will show just
as good performance even when most entries need modification, and even
better performance than wtpt when only a few entries are modified.
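
(A hypothetical sketch of what such a batched parent update might look
like, using the mmu_update interface from Xen's public headers -- not
an actual patch; the surrounding copy_page_range() plumbing and error
handling are omitted:)

    #define WRPROT_BATCH 128

    static struct mmu_update updates[WRPROT_BATCH];
    static unsigned int nr_updates;

    /* Issue all queued PTE updates in a single hypercall. */
    static void flush_wrprotect(void)
    {
        unsigned int done;

        if (nr_updates &&
            HYPERVISOR_mmu_update(updates, nr_updates, &done,
                                  DOMID_SELF) < 0)
            BUG();
        nr_updates = 0;
    }

    /* Queue one parent PTE to be made read-only, instead of writing
     * the PTE directly and taking a writable-pagetable (or emulation)
     * fault for every entry. */
    static void queue_wrprotect(uint64_t pte_mach_addr, uint64_t pte_val)
    {
        /* The low bits of ptr select the request type;
         * MMU_NORMAL_PT_UPDATE is a checked pagetable-entry write. */
        updates[nr_updates].ptr = pte_mach_addr | MMU_NORMAL_PT_UPDATE;
        updates[nr_updates].val = pte_val & ~(uint64_t)_PAGE_RW;
        if (++nr_updates == WRPROT_BATCH)
            flush_wrprotect();
    }

The caller would queue one update per parent PTE while walking the
pagetables in fork and flush once at the end, so the write-protect cost
becomes one hypercall per batch rather than one fault per entry.
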
-Andrew
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel