From: Avi Kivity <avi@xxxxxxxxxx>
To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Cc: npiggin@xxxxxxx, akpm@xxxxxxxx, jeremy@xxxxxxxx, xen-devel@xxxxxxxxxxxxxxxxxxx, tmem-devel@xxxxxxxxxxxxxx, kurt.hackel@xxxxxxxxxx, Rusty Russell <rusty@xxxxxxxxxxxxxxx>, linux-kernel@xxxxxxxxxxxxxxx, dave.mccracken@xxxxxxxxxx, linux-mm@xxxxxxxxx, sunil.mushran@xxxxxxxxxx, chris.mason@xxxxxxxxxx, Anthony Liguori <anthony@xxxxxxxxxxxxx>, Schwidefsky <schwidefsky@xxxxxxxxxx>, Marcelo Tosatti <mtosatti@xxxxxxxxxx>, alan@xxxxxxxxxxxxxxxxxxx, Balbir Singh <balbir@xxxxxxxxxxxxxxxxxx>
Date: Mon, 13 Jul 2009 14:33:51 +0300
Subject: Re: [Xen-devel] Re: [RFC PATCH 0/4] (Take 2): transcendent memory ("tmem") for Linux
On 07/13/2009 12:08 AM, Dan Magenheimer wrote:
>> Can you explain how it differs for the swap case? Maybe I don't
>> understand how tmem preswap works.
>
> The key differences I see are the "please may I store something"
> API and the fact that the reply (yes or no) can vary across time
> depending on the state of the collective of guests. Virtual
> disk caching requires the host to always say yes and always
> deliver persistence.
We need to compare tmem+swap to swap+cache, not just tmem to cache.
Here's how I see it:

tmem+swap swapout:
- guest copies page to tmem (may fail)
- guest writes page to disk

cached drive swapout:
- guest writes page to disk
- host copies page to cache

tmem+swap swapin:
- guest reads page from tmem (may fail)
- on tmem failure, guest reads swap from disk
- guest drops tmem page

cached drive swapin:
- guest reads page from disk
- host may satisfy read from cache

tmem+swap ageing:
- host may drop tmem page at any time

cached drive ageing:
- host may drop cached page at any time

So they're pretty similar. The main difference is that tmem can drop
the page on swapin. It could be made to work with swap by supporting
the TRIM command.
> I can see that this is less of a concern
> for KVM because the host can swap... though doesn't this hide
> information from the guest and potentially have split-brain
> swapping issues?
Double swap is bad for performance, yes. CMM2 addresses it nicely.
tmem doesn't address it at all - it assumes you have excess memory.
> (thanks for the great discussion so far... going offline mostly now
> for a few days)
I'm going offline too so it cancels out.
--
error compiling committee.c: too many arguments to function
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel