This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] Re: [PATCH] Allow removing writable mappings from splintered page tables.

To: deshantm@xxxxxxxxx
Subject: Re: [Xen-devel] Re: [PATCH] Allow removing writable mappings from splintered page tables.
From: "George Dunlap" <George.Dunlap@xxxxxxxxxxxxx>
Date: Tue, 16 Sep 2008 14:46:23 +0100
Cc: Gianluca Guida <gianluca.guida@xxxxxxxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Tue, 16 Sep 2008 06:46:48 -0700
In-reply-to: <de76405a0809151003p5bae1b10la7afa0a5a3b15440@xxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <48CA9E19.2030201@xxxxxxxxxxxxx> <1e16a9ed0809121008q66f8d6b8w53337d6157fc0542@xxxxxxxxxxxxxx> <48CAA369.7020205@xxxxxxxxxxxxx> <1e16a9ed0809121252v254889e5x6ea046865c66cd95@xxxxxxxxxxxxxx> <de76405a0809150338x71d89418sa3fe659756d35fe5@xxxxxxxxxxxxxx> <1e16a9ed0809150930jc3b10c6i4af96161fe0860b@xxxxxxxxxxxxxx> <de76405a0809151003p5bae1b10la7afa0a5a3b15440@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Hmm, no really obvious low-hanging fruit.  The Xen-HVM was about 9%
slower than your reported numbers for Xen-PV, and the trace shows that
the guest spent about that much inside the hypervisor.  The breakdown:
* 3.6% propagating page faults to guest
* 3.0% pulling entries through from out-of-sync guest pagetables into
the shadow pagetables
* 1.4% Marking pages out of sync (of which 75% was in unsyncs that had
to re-sync another page)
* 0.9% cr3 switches
* 0.9% handling I/O

(Rounding may cause the numbers not to add up exactly.)
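For what it's worth, a breakdown like the one above is just per-category time summed over the trace and expressed as a fraction of wall-clock time. A minimal sketch of that arithmetic (event names and durations here are invented for illustration; this is not the actual xentrace format or tooling):

```python
# Illustrative only: summing per-category time from a trace and
# expressing each category as a rounded percentage of the total run.

def breakdown(samples, total_time):
    """Sum per-category durations and return each category's share of
    total_time as a percentage, rounded to one decimal place."""
    totals = {}
    for category, duration in samples:
        totals[category] = totals.get(category, 0.0) + duration
    return {c: round(100.0 * t / total_time, 1) for c, t in totals.items()}

# Hypothetical 30-second trace (durations in seconds), chosen to
# mirror the percentages listed above:
samples = [
    ("propagate_fault", 1.08),
    ("oos_pull_through", 0.90),
    ("mark_out_of_sync", 0.42),
    ("cr3_switch", 0.27),
    ("io", 0.27),
]
pcts = breakdown(samples, 30.0)
```

Since each category is rounded independently, the rounded figures need not sum exactly to the headline number.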

So one of the biggest things, really, is that Linux seems to insist on
mapping pages one-at-a-time as they're demand-faulted, rather than
doing a batch of them.  Unfortunately, having pages out-of-sync means
that we must use the slow propagate path rather than the
fast-propagate path; the slow path is at least 25% slower.
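To put the slow/fast distinction in toy form (names and structure invented here, not Xen's actual shadow code): the fast path can reuse an already-validated shadow entry, while an out-of-sync page forces a re-read and re-validation of the guest's own entry:

```python
# Toy illustration (not Xen's actual shadow code): why an out-of-sync
# (OOS) guest pagetable forces the slow propagation path.

def propagate(gfn, shadow, guest, oos):
    """Return the entry to install for a faulting guest frame `gfn`.

    shadow: cache of entries Xen has already validated
    guest:  the guest's own pagetable (which it may write freely)
    oos:    set of frames currently allowed to be out of sync
    """
    if gfn in shadow and gfn not in oos:
        # Fast path: the guest table is known to be in sync, so the
        # previously validated shadow entry is still trustworthy.
        return shadow[gfn]
    # Slow path: the guest may have changed its table behind Xen's
    # back, so re-read the entry and re-validate it before caching.
    # (Real code would also check flags, reserved bits, and so on.)
    entry = guest[gfn]
    shadow[gfn] = entry
    return entry
```

In this toy, once a frame is in the `oos` set, even a cached shadow entry must be refreshed from the guest table, which is the extra work the slow path pays.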

The only avenues for optimization I can see are:
* See if there's a way to reduce the number of unsyncs that cause
resyncs.  Allowing more pages to go out-of-sync *might* do this; or it
might just shift the same overhead into cr3 switch.
* Reduce the time of "hot paths" through the hypervisor by profiling, &c.
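A toy counter model of the first bullet's tradeoff (policy and numbers invented for illustration): every page still out-of-sync at a cr3 switch must be resynced then, so raising the OOS limit can simply move resyncs around rather than eliminate them:

```python
# Toy model: forced resyncs when marking pages out-of-sync (OOS)
# versus resyncs deferred to the next cr3 switch. Purely illustrative;
# the eviction policy and numbers are invented.

def run(faults_between_cr3, oos_limit):
    """Count resyncs over one interval between cr3 switches."""
    oos, forced_resyncs = 0, 0
    for _ in range(faults_between_cr3):
        if oos == oos_limit:
            forced_resyncs += 1   # evict one OOS page to make room
            oos -= 1
        oos += 1                  # mark the faulting page OOS
    cr3_resyncs = oos             # cr3 switch: resync everything left
    return forced_resyncs, cr3_resyncs
```

In this toy, `run(10, 2)` and `run(10, 10)` do the same total number of resyncs; only where they happen changes, which is exactly the worry that the overhead just shifts into cr3 switch.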


On Mon, Sep 15, 2008 at 6:03 PM, George Dunlap
<George.Dunlap@xxxxxxxxxxxxx> wrote:
> Heh... the blatant copying is flattering and annoying at the same
> time. :-)  Ah, the beauty of open-source...
> I've got your trace, and I'll take a look at it tomorrow. Thanks!
>  -George
> On Mon, Sep 15, 2008 at 5:30 PM, Todd Deshane <deshantm@xxxxxxxxx> wrote:
>> On Mon, Sep 15, 2008 at 6:38 AM, George Dunlap
>> <George.Dunlap@xxxxxxxxxxxxx> wrote:
>>> And your original numbers showed elapsed time to be 527s for KVM, so
>>> now Xen is 8 seconds in the lead for HVM Linux. :-)  Thanks for the
>>> help tracking this down!
>> KVM is also working on improved page table algorithms
>> http://www.mail-archive.com/kvm@xxxxxxxxxxxxxxx/msg03562.html
>> I think the competition is a good thing.
>>> If you have time, could you take another 30-second trace with the new
>>> changes in, just for fun?  I'll take a quick look and see if there's
>>> any other low-hanging fruit to grab.
>> Sent the trace to you with another service called sendspace, since, for
>> some reason, the trace file was much bigger.
>> Todd
>> --
>> Todd Deshane
>> http://todddeshane.net
>> check out our book: http://runningxen.com
>> _______________________________________________
>> Xen-devel mailing list
>> Xen-devel@xxxxxxxxxxxxxxxxxxx
>> http://lists.xensource.com/xen-devel
