This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/



To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Subject: [Xen-devel] Re: [RFC PATCH 0/4] (Take 2): transcendent memory ("tmem") for Linux
From: Avi Kivity <avi@xxxxxxxxxx>
Date: Sun, 12 Jul 2009 12:20:49 +0300
Cc: npiggin@xxxxxxx, akpm@xxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, tmem-devel@xxxxxxxxxxxxxx, kurt.hackel@xxxxxxxxxx, Rusty Russell <rusty@xxxxxxxxxxxxxxx>, jeremy@xxxxxxxx, linux-kernel@xxxxxxxxxxxxxxx, linux-mm@xxxxxxxxx, sunil.mushran@xxxxxxxxxx, chris.mason@xxxxxxxxxx, Anthony Liguori <anthony@xxxxxxxxxxxxx>, Schwidefsky <schwidefsky@xxxxxxxxxx>, dave.mccracken@xxxxxxxxxx, Marcelo Tosatti <mtosatti@xxxxxxxxxx>, alan@xxxxxxxxxxxxxxxxxxx, Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx>
Delivery-date: Sun, 12 Jul 2009 02:18:30 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <d693761e-2f2b-4d8c-ae4f-7f22479f6c0f@default>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <d693761e-2f2b-4d8c-ae4f-7f22479f6c0f@default>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux x86_64; en-US; rv:1.9.1b3pre) Gecko/20090513 Fedora/3.0-2.3.beta2.fc11 Lightning/1.0pre Thunderbird/3.0b2
On 07/10/2009 06:23 PM, Dan Magenheimer wrote:
>> If there was one change to tmem that would make it more palatable, for
>> me it would be changing the way pools are "allocated".  Instead of
>> getting an opaque handle from the hypervisor, I would force the guest
>> to allocate its own memory and to tell the hypervisor that it's a tmem
>> pool.

> An interesting idea, but one of the nice advantages of tmem being
> completely external to the OS is that the tmem pool may be much
> larger than the total memory available to the OS.  As an extreme
> example, assume you have one 1GB guest on a physical machine that
> has 64GB physical RAM.  The guest now has 1GB of directly-addressable
> memory and 63GB of indirectly-addressable memory through tmem.
> That 63GB requires no page structs or other data structures in the
> guest.  And in the current (external) implementation, the size
> of each pool is constantly changing, sometimes dramatically, so
> the guest would have to be prepared to handle this.  I also wonder
> if this would make shared-tmem-pools more difficult.

Having no struct pages is also a downside; for example, this guest cannot have more than 1GB of anonymous memory without swapping like mad. Swapping to tmem is fast, but still a lot slower than having the memory directly available.

tmem makes life a lot easier for the hypervisor and for the guest, but it also gives up a lot of flexibility. There's a difference between memory and a very fast synchronous backing store.

> I can see how it might be useful for KVM though.  Once the
> core API and all the hooks are in place, a KVM implementation of
> tmem could attempt something like this.

My worry is that tmem for kvm leaves a lot of niftiness on the table, since it was designed for a hypervisor with much simpler memory management. kvm can already use spare memory for backing guest swap, and can already convert unused guest memory to free memory (by swapping it). tmem doesn't really integrate well with these capabilities.

error compiling committee.c: too many arguments to function
