xen-devel

Re: [Xen-devel] [PATCH] Fix restore handling checks

To: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] Fix restore handling checks
From: Michal Novotny <minovotn@xxxxxxxxxx>
Date: Wed, 23 Jun 2010 13:21:34 +0200
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Wed, 23 Jun 2010 04:22:27 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <b0eca5df-d715-41f4-b774-04f183293ac5@default>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <C846136B.18263%keir.fraser@xxxxxxxxxxxxx> <4C205583.80609@xxxxxxxxxx> <4C20B2F8.4030409@xxxxxxxxxx> <b0eca5df-d715-41f4-b774-04f183293ac5@default>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.1.9) Gecko/20100430 Fedora/3.0.4-3.fc13 Thunderbird/3.0.4
On 06/22/2010 10:46 PM, Dan Magenheimer wrote:
> Correct me if I am wrong, but I think your patch assumes
> that the amount of free memory in the system can be
> computed by assuming each guest's memory is a fixed size.
> Due to various features in Xen 4.0, this is no longer
> a safe assumption.  Tmem has a libxc call to freeze
> and unfreeze its use of memory, so dynamic memory use
> by tmem can be stopped

Maybe it's stopped, but in libxc's domain_getinfo() we should still be getting the original guest memory value even while it's frozen; the only difference would be that the memory is not accessible, since it's locked somehow. Is my understanding of tmem freeze correct?
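
Just to make sure we mean the same thing, here is a minimal sketch of what I'd expect (assuming the xen.lowlevel.xc python bindings that xend uses; the dict keys are the ones its domain_getinfo() returns):

import xen.lowlevel.xc

xc = xen.lowlevel.xc.xc()

# one dict per domain: 'mem_kb' is the memory currently allocated to
# the guest, 'maxmem_kb' its configured maximum
for dom in xc.domain_getinfo():
    print "domid %d: mem_kb=%d maxmem_kb=%d" % \
          (dom['domid'], dom['mem_kb'], dom['maxmem_kb'])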

> and another libxc call to
> determine "freeable" memory, and another to free it.
> I don't know if the page-sharing functionality added
> in 4.0 has anything similar.
>
> But in any case, simple algorithms to add up current
> (or max) guest memory will have many false-positive
> and false-negative results.


Why should it give many false positives/negatives? The handling there is to sum up the total guest memory and subtract that from the total host memory as reported by libxc's physinfo(). The minimal memory for dom0 should also be taken into account. Here's an example from my configuration: I have 8G of RAM in total; if I start one guest with 2G of RAM allocated, we should have 8 - 2 = 6G available now (no matter how much memory is allocated to dom0, since physinfo() gets the total memory information directly from the hypervisor, i.e. you could have 4G allocated to dom0 while the host machine has 8G of RAM in total).
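
In (pseudo-)code the check looks roughly like this; enough_memory_for() is a made-up helper, and I'm assuming physinfo() reports 'total_memory' in KiB. It uses 'mem_kb' for now; the scenario below shows why 'maxmem_kb' is the safer choice:

import xen.lowlevel.xc

DOM0_MIN_MEM_KB = 1 * 1024 * 1024       # dom0-min-mem = 1G, as below

def enough_memory_for(new_guest_kb):
    xc = xen.lowlevel.xc.xc()
    # total host memory straight from the hypervisor; the dom0
    # allocation itself doesn't matter here
    total_kb = xc.physinfo()['total_memory']
    # memory already committed to guests (dom0 excluded, it is
    # covered by the dom0-min-mem reserve instead)
    used_kb = sum(d['mem_kb'] for d in xc.domain_getinfo()
                  if d['domid'] != 0)
    return used_kb + new_guest_kb <= total_kb - DOM0_MIN_MEM_KB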

1. total physical memory = 8G
2. dom0_mem = 4G, dom0-min-mem = 1G
3. create guest A with 2G RAM -> 6G in total are available now
4. create guest B with 4G RAM -> 4G should be available, but the guest is still being migrated/restored
5. in the middle of the restore/migration from step 4 (guest B) we start another migration/restore of a 2G guest (guest C); since guest B has received only 2G of memory so far, "mem_kb" equals 2G for guest B (instead of 4G), so we have to use "maxmem_kb" (i.e. the 4G value) to compute that we don't have enough memory to create guest C

If we used "mem_kb" in all cases (even for the migration/restore case), the sum would be 2 + 2 (it should be 4, since the guest is restoring right now) + 2 = 6G, which is less than 8G (total memory) - 1G (dom0-min-mem) = 7G, so the guest creation would be allowed; the migration/restore of guest C would then fail and guest C would be destroyed with an incomplete memory transfer.

That's why I used "maxmem_kb" in the computation instead: for this scenario the sum is 2 + 4 + 2 = 8G, which is bigger than 7G (total memory - dom0-min-mem), so we disallow the guest restore immediately.
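
Worked through with the numbers above (illustrative arithmetic only, values in KiB):

G = 1024 * 1024                       # 1G expressed in KiB
available_kb = 8*G - 1*G              # total - dom0-min-mem = 7G

# guest B is mid-restore: its mem_kb is only 2G so far, maxmem_kb is 4G
sum_mem_kb    = 2*G + 2*G + 2*G       # A + B + C = 6G
sum_maxmem_kb = 2*G + 4*G + 2*G       # A + B + C = 8G

print sum_mem_kb    <= available_kb   # True  -> guest C wrongly allowed
print sum_maxmem_kb <= available_kb   # False -> guest C correctly refused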

So why should those calculations give many false positives or false negatives?

Michal

--
Michal Novotny <minovotn@xxxxxxxxxx>, RHCE
Virtualization Team (xen userspace), Red Hat


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
