This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] copy on write memory

To: Jacob Gorm Hansen <jacobg@xxxxxxx>
Subject: Re: [Xen-devel] copy on write memory
From: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>
Date: Fri, 19 Nov 2004 14:50:10 +0000
Cc: Keir Fraser <Keir.Fraser@xxxxxxxxxxxx>, Peri Hankey <mpah@xxxxxxxxxxxxxx>, urmk@xxxxxxxxxxxxxxxxx, Rik van Riel <riel@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxxx
Delivery-date: Fri, 19 Nov 2004 14:51:21 +0000
Envelope-to: xen+James.Bulpin@xxxxxxxxxxxx
In-reply-to: Your message of "Fri, 19 Nov 2004 13:02:01 +0100." <419DE0B9.4030502@xxxxxxx>
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx
> Could the same thing not work using an event-channel rather than a 
> hypercall then?  I guess you basically do the same when giving your 
> pages away for a driver to fill them up with data?
> My main point is that the domains have better knowledge about what pages 
> are likely to be shareable than dom0 or Xen has, and so should volunteer 
> to share them, and somehow be rewarded.

Equally, a centralised "buffer cache" domain can see request traffic
and observe empirically what pages are most beneficial to share. :-)
Both ways round could be interesting to experiment with though.

> The problem of reclamation-policy will exist for any solution that 
> over-reserves memory, including the transparent VMWare system. For some 
> pages, like the guest OS kernel text area, it would be ok to remove 
> these pages from the domain's allowance for good -- it will not need to 
> CoW these, and the domain builder could simply build that part of the 
> domain from shared pages.

Well, you can also over-commit on read-only stuff and fault it in on
demand, just as you can demand-CoW writable stuff; e.g., there's no need
to have all of the kernel or glibc in memory all the time -- only the
hot parts of both will be in use by the system at any time.

 1. There is a fault from no page -> shareable page on read accesses.
 2. There is a fault from shareable page -> shareable page + exclusive
    page on write accesses.
 Both of these require extra allocation of memory.

> Perhaps this should just be a one-way street, you give up pages to be 
> nice to others (and get cheaper hosting or whatever kind of reward you 
> can think of in return), and then you lose the right to write to them 
> for good.  Should you need more writable pages, you will have to re-grow 
> your reservation, and if that fails you will need to flush some slabs or 
> buffer caches or or page stuff to disk or whatever you do in Linux when 
> you have memory pressure.  Ultimately you may want to migrate to a less 
> loaded machine.

It's another way of looking at the problem (end-to-end style, I
suppose). Potentially worth investigating. :-)

