On Fri, 2009-05-29 at 09:00 +0800, Tim Post wrote:
> Right now, what we're doing is not quite overcommitment, it's more like
> accounting. By placing the output of sysinfo() and more (bits
> of /proc/meminfo) on Xenbus, it's easy to get a bird's eye view of which
> domains are under- or over-utilizing their given RAM. If a domain has
> 1GB, yet its kernel is consistently committing only 384MB (actual size),
> there's a good chance that the guest would do just as well with 512MB,
> depending on its buffer use. The reverse is also true. It's looking at
> the whole VM big picture, including buffers, swap, etc.
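To make the accounting concrete, here is a minimal sketch of the "actual size" calculation from /proc/meminfo fields. The field choice (subtracting reclaimable buffers and page cache) and the sample numbers are my assumptions for illustration, not the exact code running in the toolstack:

```python
# Sketch: estimate how much of its RAM a guest is actually committing,
# from /proc/meminfo-style data. Field selection is an assumption.

def parse_meminfo(text):
    """Parse '/proc/meminfo'-style 'Key:  value kB' lines into a dict of kB."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            fields[key.strip()] = int(rest.split()[0])
    return fields

def committed_kb(fields):
    """RAM in active use, excluding reclaimable buffers and page cache."""
    return (fields["MemTotal"] - fields["MemFree"]
            - fields["Buffers"] - fields["Cached"])

# Hypothetical guest with 1GB assigned but little of it committed:
sample = """\
MemTotal:        1048576 kB
MemFree:          524288 kB
Buffers:           65536 kB
Cached:           131072 kB
"""
print(committed_kb(parse_meminfo(sample)))  # 327680 kB committed out of 1 GB
```

In the scheme described above, a value like this would be published on Xenbus per domain so the host can compare committed memory against the allocation.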
Sorry, forgot to mention, average (aggregate) IOWAIT is also a key
factor. Users can do odd things like bypass the buffer cache with
relational databases. So, when we see the kernel overselling, next to
nil buffers, and a very high average IOWAIT across all vcpus, we have a
pretty good idea of what's going on.
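For the IOWAIT side, a rough sketch of aggregating the iowait share across all vcpus from /proc/stat might look like the following. The parsing and the sample jiffy counts are assumptions for illustration; a real monitor would diff two samples over an interval rather than use cumulative counters:

```python
# Sketch: average iowait fraction across all vCPUs, from /proc/stat
# 'cpuN' lines (5th numeric field is iowait jiffies).

def iowait_fraction(stat_text):
    """Aggregate iowait jiffies / total jiffies over every per-cpu line."""
    iowait = total = 0
    for line in stat_text.splitlines():
        parts = line.split()
        # Skip the summary 'cpu' line; only count per-vcpu 'cpu0', 'cpu1', ...
        if parts and parts[0].startswith("cpu") and parts[0] != "cpu":
            vals = [int(v) for v in parts[1:]]
            iowait += vals[4]
            total += sum(vals)
    return iowait / total

# Hypothetical 2-vcpu guest spending a fifth of its time in iowait:
sample = """\
cpu  200 0 100 500 200 0 0 0
cpu0 100 0 50 250 100 0 0 0
cpu1 100 0 50 250 100 0 0 0
"""
print(iowait_fraction(sample))  # 0.2
```

A consistently high value here, combined with near-empty buffers, is the signature described above of a guest doing direct I/O past the cache.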
Xenbus/Xenstore exists, and the combined size of these vitals is small ..
until admin-friendly introspection surfaces, it's really the best way to
put any given host under a stereo microscope.
The problem is differentiating disk I/O from network I/O.
Cheers,
--Tim
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel