xen-devel
Re: [Xen-devel] [PATCH] Fix restore handling checks
On 06/22/2010 10:46 PM, Dan Magenheimer wrote:
> Correct me if I am wrong, but I think your patch assumes
> that the amount of free memory in the system can be
> computed by assuming each guest's memory is a fixed size.
> Due to various features in Xen 4.0, this is no longer
> a safe assumption. Tmem has a libxc call to freeze
> and unfreeze its use of memory, so dynamic memory use
> by tmem can be stopped
Maybe it's stopped, but in libxc's domain_getinfo() we should still be
getting the original guest memory size even while it's frozen; the only
difference is that the memory should not be accessible, since it's
locked somehow. Is my understanding of tmem freeze correct?
> and another libxc call to
> determine "freeable" memory, and another to free it.
> I don't know if the page-sharing functionality added
> at 4.0 has anything similar.
> But in any case, simple algorithms to add up current
> (or max) guest memory will have many false-positive
> and false-negative results.
Why would it give many false positives/false negatives? The handling
there is to sum the total guest memory and subtract that from the total
host memory according to the physinfo() output from libxc. The minimal
memory for dom0 is also taken into account. Here's an example for my
configuration: I have 8G of RAM in total; if I start one guest with 2G
of RAM allocated, we should have 8 - 2 = 6G available now (no matter
how much memory is allocated to dom0, since physinfo() gets the total
memory information directly from the hypervisor, i.e. you could have
4G allocated to dom0 while the host machine has 8G of RAM in total).
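In code terms, that check is just the following arithmetic (a minimal
sketch of the computation described above, in GB; not actual libxc code):

```python
# Available memory = total host memory (as physinfo() reports it,
# straight from the hypervisor) minus the sum of guest allocations.
total_host = 8        # GB of RAM reported for the whole host
guests = [2]          # one guest with 2G allocated
available = total_host - sum(guests)
print(available)      # 6
```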
1. total physical memory = 8G
2. dom0_mem = 4G, dom0-min-mem = 1G
3. create the guest A with 2G RAM -> 6G in total are available now
4. create the guest B with 4G RAM -> 4G should be available, but the
guest is still being migrated/restored
5. In the middle of the restore/migrate of guest B (from step 4), we
start another migration/restore of a 2G guest (guest C). Since guest B
has only received 2G of memory so far, "mem_kb" equals 2G for guest B
(instead of 4G), so we have to use "maxmem_kb" instead (i.e. the 4G
value) to determine that we don't have enough memory to create guest C.
If we used "mem_kb" in all cases (even for the migration/restore case),
the sum would be: 2 + 2 (it should be 4, since the guest is restoring
right now) + 2 = 6G, which is less than 8G (total memory) - 1G
(dom0-min-mem) = 7G, so it would allow the guest creation. This would
result in failure when the migration/restore of guest C fails, and
guest C would then be destroyed with an incomplete memory transfer.
That's why I used "maxmem_kb" in the computation instead: for this
scenario the sum is 2 + 4 + 2 = 8G, which is bigger than 7G (total
memory - dom0-min-mem), so we disallow the restore of guest C
immediately.
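As a sketch of the two sums (values in GB; the domain tuples are
illustrative stand-ins for what domain_getinfo() would report, not
actual libxc structures):

```python
# Accounting for the scenario above, in GB.
TOTAL = 8            # total host memory from physinfo()
DOM0_MIN = 1         # dom0-min-mem

# (mem_kb, maxmem_kb) analogues for guests A, B (mid-restore), and C:
domains = [(2, 2), (2, 4), (2, 2)]

sum_mem = sum(mem for mem, _ in domains)           # 2 + 2 + 2 = 6
sum_maxmem = sum(maxm for _, maxm in domains)      # 2 + 4 + 2 = 8
limit = TOTAL - DOM0_MIN                           # 7

print(sum_mem <= limit)     # True: mem_kb would wrongly allow guest C
print(sum_maxmem <= limit)  # False: maxmem_kb disallows it immediately
```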
So why would those calculations give many false positives or
false negatives?
Michal
--
Michal Novotny <minovotn@xxxxxxxxxx>, RHCE
Virtualization Team (xen userspace), Red Hat
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel