[Xen-devel] Q about System-wide Memory Management Strategies

To: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Subject: [Xen-devel] Q about System-wide Memory Management Strategies
From: Joanna Rutkowska <joanna@xxxxxxxxxxxxxxxxxxxxxx>
Date: Mon, 02 Aug 2010 23:38:56 +0200
Cc: qubes-devel@xxxxxxxxxxxxxxxx
Dan, Xen.org'ers,

I have a few questions regarding strategies for optimal memory
assignment among VMs (PV DomUs and Dom0, all Linux-based).

We've been thinking about implementing a "Direct Ballooning" strategy
(as described on slide #20 of Dan's slides [1]), i.e. writing a daemon
that runs in Dom0 and, based on the statistics provided by ballond
daemons running in the DomUs, adjusts the memory assigned to all VMs
in the system (via xm mem-set).
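
To make this more concrete, below is a rough sketch of what we have in
mind for the Dom0 side. Everything here is hypothetical: the xenstore
key that ballond would publish its stats under, the domain inventory,
and the fixed 20% cache margin are all placeholders, and we simply
shell out to the xm toolstack.

#!/usr/bin/env python
# Rough sketch of the Dom0 balancing daemon. Hypothetical
# assumptions: ballond in each DomU publishes its Committed_AS
# reading (in KiB) to a xenstore key, and targets are applied by
# shelling out to "xm mem-set" (which takes MiB).

import subprocess
import time

POLL_INTERVAL = 10  # seconds between balancing passes

def read_committed_as(domid):
    # Hypothetical xenstore layout; ballond would write here.
    out = subprocess.check_output(
        ["xenstore-read",
         "/local/domain/%d/memory/committed-as" % domid])
    return int(out.strip())

def set_mem(domname, target_mib):
    subprocess.call(["xm", "mem-set", domname, str(target_mib)])

def balance(domains):
    # domains: dict of name -> domid; one pass over all VMs.
    for name, domid in domains.items():
        committed_kib = read_committed_as(domid)
        # Naive placeholder policy: committed memory plus a fixed
        # 20% margin for the fs cache (cf. question 1 below).
        set_mem(name, int(committed_kib * 1.2 / 1024))

if __name__ == "__main__":
    domains = {"appvm-work": 1, "appvm-web": 2}  # placeholder
    while True:
        balance(domains)
        time.sleep(POLL_INTERVAL)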

Rather than trying to maximize the number of VMs we could run at the
same time, in Qubes OS we are more interested in optimizing the user
experience for running a "reasonable" number of VMs (i.e.
minimizing/eliminating swapping). In other words, given the number of
VMs that the user feels the need to run at the same time (in practice
usually between 3 and 6), and given the amount of RAM in the system
(4-6 GB in practice today), how do we optimally distribute it among
the VMs? In our model we assume the disk backend(s) are in Dom0.

Some specific questions:
1) What is the best estimator of the "ideal" amount of RAM each VM
would like to have? Dan mentions [1] the Committed_AS value from
/proc/meminfo, but what about the fs cache? I would expect that we
should (ideally) allocate Committed_AS + some_cache amount of RAM, no?
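
In code, the estimator we are imagining would run inside each DomU and
look roughly like this (the 50% cache weight is an arbitrary
placeholder, not a recommendation):

# Sketch of the per-VM "ideal RAM" estimator from question 1:
# Committed_AS plus some fraction of the current fs cache.

def parse_meminfo():
    # Return /proc/meminfo as a dict of field name -> KiB.
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            field, value = line.split(":", 1)
            info[field] = int(value.split()[0])  # values are in kB
    return info

def ideal_ram_kib(cache_weight=0.5):
    info = parse_meminfo()
    cache = info.get("Cached", 0) + info.get("Buffers", 0)
    return int(info["Committed_AS"] + cache_weight * cache)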

2) What's the best estimator for the "minimal reasonable" amount of
RAM for a VM (below which swapping would kill the performance for
good)? The rationale behind this is that if we couldn't allocate the
"ideal" amount of RAM (point 1 above), then we would scale the
available RAM down, but only to this "reasonable minimum" value. Below
this, we would display a message to the user that they should close
some VMs (or close "inactive" ones automatically), and also we would
refuse to start any new AppVMs.
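
The scale-down logic we have in mind would look roughly like the
following sketch (a real implementation would also redistribute the
overshoot introduced by clamping to the minima):

# Sketch of the scale-down policy from question 2: shrink each
# VM's ideal allocation by a common factor, but never below its
# "reasonable minimum"; if even the minima don't fit, give up and
# let the caller ask the user to close VMs / refuse new AppVMs.

def scale_down(ideal, minimum, available):
    # ideal, minimum: dicts of vm -> KiB; available: total KiB.
    if sum(minimum.values()) > available:
        return None  # must close some VMs instead
    factor = float(available) / sum(ideal.values())
    targets = {}
    for vm in ideal:
        targets[vm] = max(int(ideal[vm] * factor), minimum[vm])
    # Clamping to the minima may overshoot `available` slightly; a
    # real implementation would redistribute the difference.
    return targets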

3) Assuming we have enough RAM to satisfy all the VMs' "ideal"
requests, what should we do with the excess RAM -- the options are:
a) distribute it among all the VMs (more per-VM RAM means larger fs
caches, which means faster I/O), or
b) assign it to Dom0, where the disk backend is running (a larger fs
cache means faster disk backends, which means faster I/O in each VM?)
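
For reference, the two policies expressed as code (the "dom0" key is
just a convention for this sketch):

# Sketch contrasting the two options from question 3 for surplus
# RAM: (a) spread it over the VMs in proportion to their ideal
# size, or (b) hand it all to Dom0 for the disk backend's fs cache.

def distribute_surplus(ideal, surplus, policy="a"):
    # ideal: dict of vm -> KiB, including a "dom0" entry.
    targets = dict(ideal)
    if policy == "a":
        total = sum(ideal.values())
        for vm in targets:
            targets[vm] += surplus * ideal[vm] // total
    else:  # policy "b"
        targets["dom0"] += surplus
    return targets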

Thanks,
joanna.

[1]
http://www.xen.org/files/xensummitboston08/MemoryOvercommit-XenSummit2008.pdf

