This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] shadow OOS and fast path are incompatible

To: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] shadow OOS and fast path are incompatible
From: Frank van der Linden <Frank.Vanderlinden@xxxxxxx>
Date: Thu, 02 Jul 2009 14:30:22 -0600
Delivery-date: Thu, 02 Jul 2009 13:31:43 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Thunderbird (X11/20090323)
We recently observed a problem with Solaris HVM domains. The bug was seen with a higher number of VCPUs (3 or more), and always had the same pattern: some memory was allocated in the guest, but the first reference to it crashed the guest with a fatal page fault. However, on inspection of the page tables, the guest's view of them was consistent: the page was present.

Disabling the out-of-sync optimization made this problem go away.

Eventually, I tracked it down to the fault fast path and the OOS code in sh_page_fault(). Here's what happens:

* CPU 0 has a page fault for a PTE in an OOS page that hasn't been synched yet
* CPU 1 has the same page fault (or at least one involving the same L1 page)
* CPU 1 enters the fast path
* CPU 0 finds the L1 page OOS and starts a resync
* CPU 1 finds it's a "special" entry (mmio or guest-not-present (GNP))
* CPU 0 finishes resync, clears OOS flag for the L1 page
* CPU 1 finds it's not an OOS L1 page
* CPU 1 finds that the shadow L1 entry is GNP
* CPU 1 bounces fault to guest (sh_page_fault returns 0)
* guest sees an unexpected page fault

There are certainly ways to rearrange the code to avoid this particular scenario, but it points to a bigger issue: the fast fault path and OOS pages are inherently incompatible. Since the fast path works outside of the shadow lock, nothing prevents another CPU from coming in and changing the OOS status, re-syncing the page, etc., right under your nose.

Optimized operations that don't involve OOS (i.e. those on a single L1 PTE) are safe in the fast path outside of the lock, since the guest will do the appropriate locking around its own PTE writes. But with OOS, you're dealing with an entire L1 page.

I haven't checked the fast emulation path, but similar problems might be lurking there in combination with OOS.

I can think of some ways to fix this, but they involve locking, which mostly defeats the purpose of the fast fault path.


- Frank
