Dan, Xen.org'ers,
I have a few questions regarding strategies for optimal memory
assignment among VMs (PV DomU and Dom0, all Linux-based).
We've been thinking about implementing a "Direct Ballooning" strategy
(as described on slide #20 in Dan's slides [1]), i.e. writing a daemon
running in Dom0 that, based on the statistics provided by the ballond
daemons running in the DomUs, adjusts the memory assigned to all VMs in
the system (via xm mem-set).
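To make the idea concrete, here is a minimal sketch of what the core of
such a Dom0 balancing loop could look like. This is purely illustrative:
the stats format reported by ballond, the proportional-to-demand policy,
and driving "xm mem-set" through subprocess are all assumptions on my
part, not a proposed implementation.

```python
import subprocess

def compute_targets(stats, total_kb):
    """Given per-VM memory demand estimates (in kB) reported by the
    in-VM ballond daemons, split the balloonable RAM proportionally
    to demand (one possible policy among many)."""
    demand = sum(stats.values())
    if demand == 0:
        return {dom: total_kb // len(stats) for dom in stats}
    return {dom: total_kb * want // demand for dom, want in stats.items()}

def apply_targets(targets):
    """Push the computed targets to the hypervisor; xm mem-set takes
    the new target in MiB."""
    for dom, kb in targets.items():
        subprocess.call(["xm", "mem-set", dom, str(kb // 1024)])

# Example: two DomUs reporting demand, 2 GiB of RAM to distribute.
targets = compute_targets({"work": 1_500_000, "web": 500_000},
                          2 * 1024 * 1024)
```

The interesting policy questions (what "demand" should mean, and how to
split it) are exactly points 1-3 below; the loop itself is trivial.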
Rather than trying to maximize the number of VMs we could run at the
same time, in Qubes OS we are more interested in optimizing the user
experience for running a "reasonable number" of VMs (i.e.
minimizing/eliminating swapping). In other words, given the number of
VMs that the user feels the need to run at the same time (in practice
usually between 3 and 6), and given the amount of RAM in the system (4-6
GB in practice today), how do we optimally distribute it among the VMs?
In our model we assume the disk backend(s) are in Dom0.
Some specific questions:
1) What is the best estimator of the "ideal" amount of RAM each VM would
like to have? Dan mentions [1] the Committed_AS value from
/proc/meminfo, but what about the fs cache? I would expect that we
should (ideally) allocate Committed_AS + some_cache amount of RAM, no?
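An estimator along those lines might read both fields from /proc/meminfo
and add a weighted fraction of the current page cache. The 50% cache
weight below is an arbitrary placeholder (that weight is really the
question being asked here), and the function names are mine:

```python
def parse_meminfo(text):
    """Parse /proc/meminfo-style 'Field:   1234 kB' lines into a dict
    mapping field name to its value in kB."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            fields[key.strip()] = int(rest.split()[0])
    return fields

def ideal_ram_kb(meminfo, cache_weight=0.5):
    """'Ideal' RAM: Committed_AS plus a weighted share of the cache."""
    return int(meminfo["Committed_AS"] + cache_weight * meminfo["Cached"])

# On a live system: ideal_ram_kb(parse_meminfo(open("/proc/meminfo").read()))
sample = "MemTotal: 409600 kB\nCached: 102400 kB\nCommitted_AS: 204800 kB\n"
print(ideal_ram_kb(parse_meminfo(sample)))  # 204800 + 0.5 * 102400 = 256000
```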
2) What's the best estimator for the "minimal reasonable" amount of RAM
for a VM (below which swapping would kill performance for good)? The
rationale behind this is that if we couldn't allocate the "ideal" amount
of RAM (point 1 above), we would scale the available RAM down, but only
as far as this "reasonable minimum". Below that, we would display a
message telling the user to close some VMs (or would close "inactive"
ones automatically), and we would also refuse to start any new AppVMs.
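The scale-down-with-a-floor behaviour described above could be sketched
roughly as follows (assuming per-VM "ideal" and "minimum" estimates in
kB; the uniform scaling factor and binary search are my illustrative
choices, not part of the question):

```python
def scale_down(ideal, minimum, available_kb):
    """Shrink each VM's ideal allocation by a common factor, but never
    below its per-VM minimum; return None to signal 'ask the user to
    close some VMs' when even the minimums do not fit."""
    if sum(minimum.values()) > available_kb:
        return None  # refuse new AppVMs, prompt the user to close VMs
    lo, hi = 0.0, 1.0
    for _ in range(50):  # binary search for the largest feasible factor
        f = (lo + hi) / 2
        total = sum(max(minimum[d], int(f * ideal[d])) for d in ideal)
        if total > available_kb:
            hi = f
        else:
            lo = f
    return {d: max(minimum[d], int(lo * ideal[d])) for d in ideal}
```

With f == 1 this degenerates to granting every VM its ideal amount, so
the same routine covers both the comfortable and the constrained case.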
3) Assuming we have enough RAM to satisfy all the VMs' "ideal" requests,
what should we do with the excess RAM? The options are:
a) distribute it among all the VMs (more per-VM RAM means larger FS
caches, which means faster I/O), or
b) assign it to Dom0, where the disk backend is running (a larger FS
cache means faster disk backends, and thus faster I/O in each VM?)
Thanks,
joanna.
[1]
http://www.xen.org/files/xensummitboston08/MemoryOvercommit-XenSummit2008.pdf
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel