This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] Re: vram_dirty vs. shadow paging dirty tracking

To: "Anthony Liguori" <aliguori@xxxxxxxxxx>, "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxx>
Subject: RE: [Xen-devel] Re: vram_dirty vs. shadow paging dirty tracking
From: "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxx>
Date: Wed, 14 Mar 2007 00:17:46 -0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 13 Mar 2007 17:20:35 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <45F6FC68.3040207@xxxxxxxxxx><8A87A9A84C201449A0C56B728ACF491E0B9DBF@xxxxxxxxxxxxxxxxxxxxxxxxxxx> <45F717EE.5040900@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcdltyUt779ztHMWQvuXS09oZb+tFQAFZfiA
Thread-topic: [Xen-devel] Re: vram_dirty vs. shadow paging dirty tracking
> > Yep, it's been in the roadmap doc for quite a while. However, the log
> > dirty code isn't ideal for this. We'd need to extend it to enable it
> > to be turned on for just a subset of the GFN range (we could use a
> > Xen rangeset for this).
> >
> Okay, I was curious if the log dirty stuff could do ranges.  I guess
> not.

It could certainly be added, but I prefer the dirty bit solution to this
particular problem. 
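For the range-restricted log-dirty idea above, a rough sketch of the filtering step might look like the following. This is a toy interface for illustration only; Xen's actual rangeset API and log-dirty code differ, and the names here are hypothetical:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical tracked GFN range, e.g. the guest framebuffer.
 * Xen's real rangeset type supports multiple disjoint ranges. */
struct gfn_range {
    uint64_t start;   /* first GFN tracked */
    uint64_t end;     /* last GFN tracked, inclusive */
};

/* In the write-fault path, only mark the page dirty (and only pay
 * the log-dirty cost) when the faulting GFN lies in the tracked
 * range, instead of tracking the whole guest address space. */
static bool log_dirty_tracked(const struct gfn_range *r, uint64_t gfn)
{
    return gfn >= r->start && gfn <= r->end;
}
```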
> > Even so, I'm not super keen on the idea of tearing down and
> > rebuilding 1024 PTEs up to 50 times a second.
> >
> > A lower overhead solution would be to do scanning and resetting of
> > the dirty bits on the PTEs (and a global TLB flush).
> Right, this is the approach I was assuming.  There's really no use in
> tearing down the whole PTE (since you would have to take an extraneous
> read fault).
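The scan-and-reset pass is roughly the following. This is a self-contained sketch using a plain array to stand in for the shadow PTEs mapping the framebuffer; the real implementation would walk actual shadow page tables and issue a real TLB flush:

```c
#include <stddef.h>
#include <stdint.h>

#define PTE_DIRTY (1ULL << 6)   /* x86 PTE dirty bit (bit 6) */

/* Scan the PTEs mapping the framebuffer, record which pages the guest
 * wrote since the last scan, and clear the dirty bits so the next scan
 * only sees fresh writes.  The PTEs stay intact, so re-touching a page
 * costs no read fault; only the first write after a scan sets the
 * dirty bit again.  Returns the number of dirty pages found. */
static size_t scan_and_reset_dirty(uint64_t *ptes, size_t n,
                                   uint8_t *dirty_map)
{
    size_t ndirty = 0;

    for (size_t i = 0; i < n; i++) {
        if (ptes[i] & PTE_DIRTY) {
            dirty_map[i] = 1;
            ptes[i] &= ~PTE_DIRTY;  /* reset, don't tear down the PTE */
            ndirty++;
        } else {
            dirty_map[i] = 0;
        }
    }

    /* In the hypervisor this must be followed by a global TLB flush,
     * since cached translations may still carry the old dirty state. */
    return ndirty;
}
```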
> > In the general case this is tricky as the framebuffer could be
> > mapped by multiple PTEs.  In practice, I believe this doesn't
> > happen for either Linux or Windows.
> >
> I wouldn't think so, but showing my ignorance for a moment, does
> shadow2 not provide a mechanism to look up VAs given a GFN?  This
> lookup could be cheap if the structures are built during shadow page
> table construction.

No, it deliberately doesn't, because threading all the PTEs that point
to a GFN can consume quite a bit of memory, introduces locking
complexity that will affect future scalability, and turns out to be
completely unnecessary for normal shadow mode operation, since some
simple heuristics get a near-perfect hit rate.

> Sounds like this is a good long term goal but I think I'll stick with
> the threading as an intermediate goal.

Yes, that's more immediately useful, thanks.

> I've got a minor concern that threading isn't going to help us much
> when dom0 is UP, since the VGA scanning won't happen while an
> MMIO/PIO request is being serviced.

I think the VGA scanning burns enough CPU to stand a good chance of
getting pre-empted when an MMIO/PIO request arrives. We need to make
sure there's no synchronization required that prevents this.

