Hmm, no really obvious low-hanging fruit. The Xen-HVM run was about 9%
slower than the numbers you reported for Xen-PV, and the trace shows that
the guest spent about that much time inside the hypervisor. The breakdown:
* 3.6% propagating page faults to the guest
* 3.0% pulling entries through from out-of-sync guest pagetables to shadow pagetables
* 1.4% marking pages out of sync (of which 75% was in unsyncs that had to re-sync another page)
* 0.9% cr3 switches
* 0.9% handling I/O
(Rounding may cause the numbers not to add up exactly.)
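As a quick sanity check on the breakdown above (a throwaway sketch, not part of the trace tooling), the categories sum to roughly the 9% overall gap once rounding is accounted for:

```python
# Sum the per-category trace percentages quoted above to confirm they
# account for the ~9% gap between the Xen-HVM and Xen-PV runs.
breakdown = {
    "propagating page faults to guest": 3.6,
    "pulling entries from out-of-sync guest pagetables": 3.0,
    "marking pages out of sync": 1.4,
    "cr3 switches": 0.9,
    "handling I/O": 0.9,
}
total = sum(breakdown.values())
print(f"total time in hypervisor: {total:.1f}%")  # -> 9.8%, i.e. "about 9%"
```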
So one of the biggest things, really, is that Linux seems to insist on
mapping pages one at a time as they're demand-faulted, rather than
doing a batch of them. Unfortunately, having pages out of sync means
that we must use the slow-propagate path rather than the
fast-propagate path, which is at least 25% slower.
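To make the cost structure concrete, here is a toy model (emphatically not Xen code; the function name and unit costs are made up, with only the "at least 25% slower" factor taken from the figures above) of why per-page demand faulting on out-of-sync pages adds up:

```python
# Toy cost model: each demand fault on an out-of-sync page must take the
# slow-propagate path, which is at least 25% more expensive than the
# fast-propagate path. Costs are in arbitrary time units.
FAST_PROPAGATE_COST = 1.0
SLOW_PROPAGATE_COST = 1.25  # "at least 25% slower"

def propagate_fault(page_out_of_sync: bool) -> float:
    """Modeled cost of propagating one demand fault into the shadows."""
    return SLOW_PROPAGATE_COST if page_out_of_sync else FAST_PROPAGATE_COST

# Faulting n pages in one at a time, all out of sync vs. all in sync:
n = 1000
slow_total = propagate_fault(True) * n
fast_total = propagate_fault(False) * n
print(slow_total / fast_total)  # -> 1.25
```

Batching the mappings, or keeping more pages in sync, would shrink either n or the per-fault cost in this model.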
The only avenues for optimization I can see are:
* See if there's a way to reduce the number of unsyncs that cause
resyncs. Allowing more pages to go out-of-sync *might* do this; or it
might just shift the same overhead into cr3 switch.
* Reduce the time of "hot paths" through the hypervisor by profiling, &c.
On Mon, Sep 15, 2008 at 6:03 PM, George Dunlap
> Heh... the blatant copying is flattering and annoying at the same
> time. :-) Ah, the beauty of open-source...
> I've got your trace, and I'll take a look at it tomorrow. Thanks!
> On Mon, Sep 15, 2008 at 5:30 PM, Todd Deshane <deshantm@xxxxxxxxx> wrote:
>> On Mon, Sep 15, 2008 at 6:38 AM, George Dunlap
>> <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>> And your original numbers showed elapsed time to be 527s for KVM, so
>>> now Xen is 8 seconds in the lead for HVM Linux. :-) Thanks for the
>>> help tracking this down!
>> KVM is also working on improved page table algorithms.
>> I think the competition is a good thing.
>>> If you have time, could you take another 30-second trace with the new
>>> changes in, just for fun? I'll take a quick look and see if there's
>>> any other low-hanging fruit to grab.
>> Sent the trace to you via a different service, sendspace, since for
>> some reason the trace file was much bigger this time.
>> Todd Deshane
>> check out our book: http://runningxen.com
>> Xen-devel mailing list