Hi Joanna --
The slides you refer to are over two years old, and there's
been a lot of progress in this area since then. I suggest
you google for "Transcendent Memory" and especially
my presentation at the most recent Xen Summit North America.
Specifically, I now have "selfballooning" built into
the guest kernel... I don't see direct ballooning as
feasible (certainly not without other guest changes such
as cleancache and frontswap).
Anyway, I have limited availability in the next couple of
weeks, but would love to talk (or email) more about
this topic after that (clarifying questions in the
meantime are of course welcome).
> -----Original Message-----
> From: Joanna Rutkowska [mailto:joanna@xxxxxxxxxxxxxxxxxxxxxx]
> Sent: Monday, August 02, 2010 3:39 PM
> To: xen-devel@xxxxxxxxxxxxxxxxxxx; Dan Magenheimer
> Cc: qubes-devel@xxxxxxxxxxxxxxxx
> Subject: Q about System-wide Memory Management Strategies
> Dan, Xen.org'ers,
> I have a few questions regarding strategies for optimal memory
> assignment among VMs (PV DomUs and Dom0, all Linux-based).
> We've been thinking about implementing a "Direct Ballooning" strategy
> (as described on slide #20 of Dan's slides), i.e. writing a daemon
> running in Dom0 that, based on the statistics provided by the ballond
> daemons running in the DomUs, would adjust the memory assigned to all
> VMs in the system (via xm mem-set).
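> Roughly, the daemon we have in mind would look like the sketch below
> (the xenstore path, the ballond reporting format, and the naive
> scaling policy are just placeholders to make the idea concrete):
>
>   #!/usr/bin/env python
>   # Dom0 "direct ballooning" daemon -- naive policy, for illustration only.
>   import subprocess, time
>
>   DOM0_RESERVE_MB = 1024          # keep this much for Dom0 / disk backends
>
>   def read_ballond_mb(domid):
>       # hypothetical: ballond publishes its current "ideal" target (MiB)
>       # under some agreed-upon xenstore key
>       out = subprocess.check_output(
>           ["xenstore-read", "/local/domain/%d/memory/ballond-target" % domid])
>       return int(out)
>
>   def mem_set(domid, target_mb):
>       # adjust the VM's balloon target via the Xen toolstack
>       subprocess.call(["xm", "mem-set", str(domid), str(target_mb)])
>
>   def balance(domids, total_mb):
>       avail = total_mb - DOM0_RESERVE_MB
>       wanted = dict((d, read_ballond_mb(d)) for d in domids)
>       scale = min(1.0, float(avail) / max(1, sum(wanted.values())))
>       for d in domids:
>           mem_set(d, int(wanted[d] * scale))
>
>   if __name__ == "__main__":
>       while True:
>           balance(domids=[1, 2, 3], total_mb=6144)   # example values
>           time.sleep(5)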
> Rather than trying to maximize the number of VMs we could run at the
> same time, in Qubes OS we are more interested in optimizing the user
> experience when running a "reasonable number" of VMs (i.e.
> minimizing/eliminating swapping). In other words, given the number of
> VMs that the user feels the need to run at the same time (in practice
> usually between 3 and 6), and given the amount of RAM in the system
> (4-6 GB in practice today), how do we optimally distribute it among the
> VMs? In our model we assume the disk backend(s) are in Dom0.
> Some specific questions:
> 1) What is the best estimator of the "ideal" amount of RAM each VM
> would like to have? Dan mentions the Committed_AS value from
> /proc/meminfo, but what about the fs cache? I would expect that we
> should (ideally) allocate Committed_AS + some_cache amount of RAM, no?
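> To make (1) concrete, inside the guest we picture something like the
> snippet below (the cache allowance -- a floor plus a fraction of
> Cached -- is just a guess, not a claim about the right heuristic):
>
>   def ideal_ram_kib(cache_floor_kib=100 * 1024, cache_fraction=0.25):
>       # parse /proc/meminfo (values are reported in kB)
>       meminfo = {}
>       with open("/proc/meminfo") as f:
>           for line in f:
>               key, value = line.split(":", 1)
>               meminfo[key] = int(value.split()[0])
>       committed = meminfo["Committed_AS"]
>       cached = meminfo.get("Cached", 0)
>       # "ideal" = committed anonymous memory plus some room for fs cache
>       return committed + max(cache_floor_kib, int(cache_fraction * cached))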
> 2) What's the best estimator for the "minimal reasonable" amount of RAM
> for a VM (below which swapping would kill the performance for good)?
> The rationale behind this is that if we couldn't allocate the "ideal"
> amount of RAM (point 1 above), then we would scale the available RAM
> down, but only to this "reasonable minimum" value. Below this, we would
> display a message to the user that they should close some VMs (or we
> would close "inactive" ones automatically), and we would also refuse to
> start any new VMs.
> 3) Assuming we have enough RAM to satisfy all the VMs' "ideal" amounts,
> what should we do with the excess RAM -- the options are:
> a) distribute it among all the VMs (more per-VM RAM means larger FS
> caches, which means faster I/O -- see the sketch below), or
> b) assign it to Dom0, where the disk backend is running (a larger FS
> cache in Dom0 means faster disk backends, which means faster I/O in
> each VM?)
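> For reference, the trivial way we picture (a) -- splitting the surplus
> proportionally to each VM's "ideal" figure (MiB values; just a sketch):
>
>   def split_surplus_proportionally(surplus_mb, ideal):
>       total_ideal = max(1, sum(ideal.values()))
>       return dict((vm, surplus_mb * ideal[vm] // total_ideal)
>                   for vm in ideal)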