
To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Subject: Re: [Xen-devel] [RFC] Transcendent Memory ("tmem"): a new approach to physical memory management
From: Alan Cox <alan@xxxxxxxxxxxxxxxxxxx>
Date: Fri, 9 Jan 2009 18:37:37 +0000
Cc: "Xen-Devel \(E-mail\)" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Delivery-date: Fri, 09 Jan 2009 10:37:40 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <3e92aa6e-2348-4e6a-b4e4-454915b72212@default>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Organization: Red Hat UK Ltd., Amberley Place, 107-111 Peascod Street, Windsor, Berkshire, SL4 1TE, United Kingdom. Registered in England and Wales under registration number 3798903
References: <20090108213819.6013b56d@xxxxxxxxxxxxxxxxxxx> <3e92aa6e-2348-4e6a-b4e4-454915b72212@default>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
> Could you point me to the two one liners?  If they are
> simply informing the hypervisor, that is certainly a step
> in the right direction.  IBM's CMM2 is of course much more

Yes - they hook arch_free_page and arch_alloc_page so that free pages are
known to the hypervisor layer.
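For reference, a minimal sketch of what such hooks look like (the
hv_mark_unused/hv_mark_used hypercall wrappers are hypothetical stand-ins,
not the actual S/390 code):

    /* An architecture defines HAVE_ARCH_FREE_PAGE / HAVE_ARCH_ALLOC_PAGE
     * and provides these hooks; the page allocator then calls them on
     * every free and allocation, so the hypervisor layer always knows
     * which frames the guest is actually using. */
    #include <linux/mm.h>

    void arch_free_page(struct page *page, int order)
    {
            /* Frames are now free: the hypervisor may reclaim them. */
            hv_mark_unused(page_to_pfn(page), 1UL << order);
    }

    void arch_alloc_page(struct page *page, int order)
    {
            /* Frames are back in use: they must be backed by real RAM. */
            hv_mark_used(page_to_pfn(page), 1UL << order);
    }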

> the Linux kernel actively participates in the "admission
> policy" so this information need not be inferred outside
> of the kernel by the hypervisor.

Yes - the patches are very interesting and you take it a stage further
than the S/390 hooks by exposing a lot more to the hypervisor.

> I'm not trying to implement distributed shared memory.
> There is no "across the network" except what the cluster
> fs handles already.  The clusterfs must ensure that the

That was what confused me about the shared pools. I had assumed that
shared pools would imply DSM simply because two guests could use a shared
pool and one of them could be live migrated without the other.

> SSD interface or might help for hot-swap memory.

Not something I'd thought about. The problem with hot swap is generally
one of managing to get stuff removed from a given physical page of RAM.
Having more flexible allocators probably helps there simply because you
can make space to relocate pages underneath the guest.

> I also think it might be used like compcache, but with
> the advantage that clean page cache pages can also be
> compressed.

Would it be useful to distinguish between pages the OS definitely doesn't
care about (freed) and pages that can vanish, at least in terms of
reclaiming them between guests? It seems that truly free pages are the
first target and there may even be a proper hierarchy.
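Something along these lines is the sort of ordering I have in mind,
cheapest to reclaim first (purely illustrative, the names are made up and
not taken from the tmem patches):

    /* Hypothetical classification of guest pages, ordered by how cheaply
     * the hypervisor can take them away. */
    enum guest_page_class {
            PAGE_FREED,        /* freed by the guest; contents are dead      */
            PAGE_EPHEMERAL,    /* may silently vanish, e.g. clean page cache */
            PAGE_PERSISTENT,   /* guest expects the data back later          */
            PAGE_IN_USE,       /* ordinary guest RAM; cannot be reclaimed    */
    };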


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel