

[Xen-devel] Re: [PATCH 3 of 4] Nested p2m: clarify logic in p2m_get_nestedp2m()

To: Christoph Egger <Christoph.Egger@xxxxxxx>
Subject: [Xen-devel] Re: [PATCH 3 of 4] Nested p2m: clarify logic in p2m_get_nestedp2m()
From: Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Date: Fri, 24 Jun 2011 16:05:09 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 24 Jun 2011 08:08:01 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4E04A4F0.4090803@xxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <patchbomb.1308759026@xxxxxxxxxxxxxxxxxxxxxxx> <b265371addbbc8a58c95.1308759029@xxxxxxxxxxxxxxxxxxxxxxx> <4E049E64.9080908@xxxxxxx> <20110624143726.GI9784@xxxxxxxxxxxxxxxxxxxxxxx> <4E04A4F0.4090803@xxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.21 (2010-09-15)

At 15:53 +0100 on 24 Jun (1308930816), Christoph Egger wrote:
> > More generally, I think that you need to figure out exactly what
> > behaviour you want from this function.  For example in the current code
> > there's no way that two vcpus with the same ncr3 value can share a
> > nested-p2m.  Is that deliberate?
> By 'current code' do you mean with or w/o this patch ?

Both, and every version of the code from before my current series through
to the full series applied.

> It is deliberate that two vcpus with the same ncr3 share a nested-p2m.

But they don't.  The code in current unstable tip does this:

    for (i = 0; i < MAX_NESTEDP2M; i++) {
        p2m = d->arch.nested_p2m[i];
        if ((p2m->cr3 != cr3 && p2m->cr3 != CR3_EADDR) || (p2m != nv->nv_p2m))
            continue;

        // ... return this p2m

    /* All p2m's are or were in use. Take the least recently used one,
     * flush it and reuse. */
    for (i = 0; i < MAX_NESTEDP2M; i++) {
        p2m = p2m_getlru_nestedp2m(d, NULL);
        rv = p2m_flush_locked(p2m);
        if (rv == 0)

        // ... return this p2m

The first loop never returns a p2m that's != nv->nv_p2m.  The second
loop always returns a fresh, flushed p2m.  So there's no way that two
different vcpus, starting with nv->nv_p2m == NULL, can ever get the same
p2m as each other. 
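
To make that concrete, here is a minimal stand-alone sketch (the struct, the
values and the skip_slot() helper are simplified stand-ins of my own, not the
real Xen types) showing that the first loop's skip-condition rejects every
slot except the caller's own nv_p2m, even when the cr3 matches:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CR3_EADDR (~0ULL)   /* "no cr3 assigned yet" marker */

    struct fake_p2m { uint64_t cr3; };

    /* The condition from the first loop: true means "skip this slot". */
    static bool skip_slot(struct fake_p2m *p2m, uint64_t cr3,
                          struct fake_p2m *nv_p2m)
    {
        return (p2m->cr3 != cr3 && p2m->cr3 != CR3_EADDR) || (p2m != nv_p2m);
    }

    int main(void)
    {
        struct fake_p2m mine = { .cr3 = 0x1000 }, other = { .cr3 = 0x1000 };

        /* Another vcpu's slot with the same ncr3 is still skipped ... */
        printf("same cr3, different p2m: skip = %d\n",
               skip_slot(&other, 0x1000, &mine));
        /* ... only the caller's own nv_p2m gets past the filter. */
        printf("own nv_p2m:              skip = %d\n",
               skip_slot(&mine, 0x1000, &mine));
        return 0;
    }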

The pseudocode is basically: 
 - If I have an existing nv_p2m and it hasn't been flushed, reuse it. 
 - Else flush all np2ms in LRU order and return the last one flushed.
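
Condensed into C, that flow is roughly as follows (my paraphrase of the
snippets quoted above, not the literal source; locking, refcounting and
cpumask handling are left out, and nv_p2m_usable() is a made-up stand-in for
the "exists and hasn't been flushed" test):

    static struct p2m_domain *
    get_nestedp2m_sketch(struct domain *d, struct nestedvcpu *nv, uint64_t ncr3)
    {
        struct p2m_domain *p2m;
        int i;

        /* Step 1: the first loop can only ever hand back nv->nv_p2m. */
        if (nv->nv_p2m != NULL && nv_p2m_usable(nv, ncr3))  /* hypothetical helper */
            return nv->nv_p2m;

        /* Step 2: flush np2ms in LRU order; the one whose flush succeeds
         * is handed back, so this path always yields a fresh p2m. */
        for (i = 0; i < MAX_NESTEDP2M; i++) {
            p2m = p2m_getlru_nestedp2m(d, NULL);
            if (p2m_flush_locked(p2m) == 0)
                return p2m;
        }
        return NULL;  /* not reached in practice */
    }

(If two vcpus with the same ncr3 were meant to share, step 1 would presumably
have to accept any slot with a matching cr3, not only nv->nv_p2m.)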

My patch 3/4 doesn't change the logic at all (I think); your latest fix
just avoids the over-aggressive flushing of all np2ms. 

> But fixing the p2m locking problem in upstream tree has a higher
> priority right now and we can work on that after the p2m locking
> issue is fixed upstream.

AFAICS the locking is fixed by the current set of patches (though I'm
still not able to run Xen-in-Xen well enough to test them).  I can send
the full series again for clarity if you like.  The outstanding bug is
that there are many more IPIs than previously; I suspect that your
latest fix will reduce them quite a lot by avoiding a storm of
mutually-destructive flush operations.  If the performance is still too
bad we can add more IPI-avoidance strategies.



Tim Deegan <Tim.Deegan@xxxxxxxxxx>
Principal Software Engineer, Xen Platform Team
Citrix Systems UK Ltd.  (Company #02937203, SL9 0BG)
