This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-devel] RE: vram_dirty vs. shadow paging dirty tracking

To: "Anthony Liguori" <aliguori@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] RE: vram_dirty vs. shadow paging dirty tracking
From: "Ian Pratt" <Ian.Pratt@xxxxxxxxxxxx>
Date: Tue, 13 Mar 2007 21:02:23 -0000
Delivery-date: Tue, 13 Mar 2007 14:05:18 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <45F6FC68.3040207@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcdlpnoXH6PGBcUCTMy5+C0qlQcRlwACtz7A
Thread-topic: vram_dirty vs. shadow paging dirty tracking
> When thinking about multithreading the device model, it occurred to me
> that it's a little odd that we're doing a memcmp to determine which
> portions of the VRAM have changed.  Couldn't we just use dirty page
> tracking in the shadow paging code?  That should significantly lower
> the overhead of this, plus I believe the infrastructure is already
> mostly there in the shadow2 code.

Yep, it's been in the roadmap doc for quite a while. However, the log
dirty code isn't ideal for this. We'd need to extend it so that it can
be turned on for just a subset of the GFN range (we could use a Xen
rangeset for this).
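One way to picture the extension: a range filter in front of the log-dirty bitmap, so only faults on the framebuffer's GFNs are recorded. This is a minimal illustrative sketch, not Xen's actual log-dirty or rangeset API; all names and the example GFN range are made up.

```c
#include <stdint.h>

/* Hypothetical GFN range to track (e.g. the VRAM pages). */
#define TRACK_START 0xA0u   /* first tracked GFN */
#define TRACK_END   0xC0u   /* one past the last tracked GFN */

static uint8_t dirty_bitmap[(TRACK_END - TRACK_START + 7) / 8];

/* Called from the write-fault path: record a GFN as dirty only if it
 * falls inside the tracked range; all other GFNs are ignored, so the
 * rest of the guest's memory pays no log-dirty cost. */
static inline int mark_dirty(uint64_t gfn)
{
    if (gfn < TRACK_START || gfn >= TRACK_END)
        return 0;                           /* outside range: untracked */
    uint64_t idx = gfn - TRACK_START;
    dirty_bitmap[idx / 8] |= (uint8_t)(1u << (idx % 8));
    return 1;
}

/* Consumer side (the device model's refresh pass): test and clear one
 * GFN's dirty bit.  gfn must lie inside the tracked range. */
static inline int test_and_clear_dirty(uint64_t gfn)
{
    uint64_t idx = gfn - TRACK_START;
    uint8_t mask = (uint8_t)(1u << (idx % 8));
    int was_dirty = (dirty_bitmap[idx / 8] & mask) != 0;
    dirty_bitmap[idx / 8] &= (uint8_t)~mask;
    return was_dirty;
}
```

The point of the range check is that log-dirty cost is confined to the framebuffer pages rather than imposed on every guest page.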

Even so, I'm not super keen on the idea of tearing down and rebuilding
1024 PTEs up to 50 times a second.

A lower overhead solution would be to scan and reset the dirty bits on
the PTEs (followed by a global TLB flush). In the general case this is
tricky, as the framebuffer could be mapped by multiple PTEs; in
practice, I believe this doesn't happen for either Linux or Windows.
There's always a good fallback of just returning 'all dirty' if the
heuristic is violated. It would be good to knock this up.
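The scan-and-reset idea with the 'all dirty' fallback could be sketched roughly as below. This is an illustrative model, not Xen code: the PTEs are plain 64-bit words in an array, the bit positions follow the x86 layout, and the violation detected here is a missing mapping (a real version would also fall back on detecting aliased mappings, and would issue the TLB flush after clearing the D bits).

```c
#include <stdint.h>
#include <stddef.h>

#define PTE_PRESENT (1ull << 0)   /* x86 P bit */
#define PTE_DIRTY   (1ull << 6)   /* x86 D bit */

/* Scan the n PTEs assumed to map the framebuffer, one per page.
 * Fills dirty[i] = 1 if page i was written since the last scan, and
 * clears each D bit so the next scan starts fresh (a global TLB flush
 * would follow in a real implementation).  If the single-mapping
 * heuristic is violated -- modelled here as a PTE that is not present
 * -- every page is reported dirty, which is always safe.
 * Returns 1 on the fast path, 0 when the fallback was taken. */
static int scan_fb_dirty(uint64_t *ptes, size_t n, uint8_t *dirty)
{
    size_t i;

    for (i = 0; i < n; i++) {
        if (!(ptes[i] & PTE_PRESENT)) {
            /* Heuristic violated: give up and report everything
             * dirty rather than risk missing an update. */
            for (i = 0; i < n; i++)
                dirty[i] = 1;
            return 0;
        }
        dirty[i] = (ptes[i] & PTE_DIRTY) != 0;
        ptes[i] &= ~PTE_DIRTY;      /* reset for the next scan pass */
    }
    return 1;
}
```

Compared with tearing down the mappings, this leaves the PTEs in place, so a refresh pass costs only a linear scan plus one TLB flush rather than 1024 shadow faults.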

