xen-devel

re: [Xen-devel] Xen balloon driver discuss

Building a new balloon driver turned out to be much easier than I thought, so
I quickly got more results.

I created the domain like this:

xm cr xxx.hvm maxmem=2048 memory=512

In the new driver, current_pages is 525312 (larger than the PoD entry count,
so after ballooning, pod_entries == pod_cached will be satisfied); that is
2052M. I later found that this number comes from domain->max_pages.

/local/domain/<domid>/memory/target is 524288 (KiB), that is 512M.
Inside the guest, /proc/meminfo reports 482236 kB of total memory, that is
470.93M.

What is strange:
the balloon driver holds 2052 - 512 = 1540M,
and the guest actually has 470.93M;
1540 + 470.93 = 2010.93M < 2048M.

So I wonder where the remaining memory (2048 - 2010.93 = 37.07M) has gone?
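
For reference, here is a quick standalone sanity check of the arithmetic (my
own throwaway sketch, not driver code; it assumes 4 KiB pages and just
replays the figures above):

#include <stdio.h>

int main(void)
{
    /* Figures quoted above; 4 KiB pages assumed. */
    unsigned long max_pages   = 525312;  /* domain->max_pages                   */
    unsigned long target_kib  = 524288;  /* /local/domain/<domid>/memory/target */
    unsigned long meminfo_kib = 482236;  /* MemTotal from guest /proc/meminfo   */

    printf("max_pages = %lu MiB\n", max_pages * 4 / 1024);   /* 2052   */
    printf("target    = %lu MiB\n", target_kib / 1024);      /* 512    */
    printf("meminfo   = %.2f MiB\n", meminfo_kib / 1024.0);  /* 470.93 */

    /* The gap asked about: balloon holds (2052 - 512)M, guest sees 470.93M. */
    double gap = 2048.0 - ((2052.0 - 512.0) + meminfo_kib / 1024.0);
    printf("unaccounted = %.2f MiB\n", gap);                 /* ~37.07 */
    return 0;
}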

Thanks. 
-----Original Message-----
Date: 2010-12-01 13:07
To: 'tinnycloud'; 'George Dunlap'
CC: 'Chu Rui'; xen-devel@xxxxxxxxxxxxxxxxxxx; 'Dan Magenheimer'
Subject: re: [Xen-devel] Xen balloon driver discuss

Hi George:

        I think I know the problem: it is caused by the balloon driver I
used being out of date.
        My guest kernel (and hence the balloon driver) is from
ftp://ftp.redhat.com/pub/redhat/linux/enterprise/5Server/en/os/SRPMS/,
kernel-2.6.18-164.el5.src.rpm.

        The problem is that at the very beginning, the PoD entry total
differs from current_pages in the balloon driver.
        (At the beginning, both the PoD entry count and current_pages should
point to the same value, namely the total memory allocated for the guest.
         But in fact the PoD entry count is 523776, while current_pages is
only 514879.
         So from the PoD point of view the balloon needs to inflate by
523776 - target, but the balloon driver only inflates by 514879 - target.
         This is the problem.)
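
To put numbers on the mismatch, another throwaway sketch (not the actual
balloon.c logic; the target value below is illustrative, everything else is
the figures above, with 4 KiB pages assumed):

#include <stdio.h>

int main(void)
{
    unsigned long pod_entries   = 523776; /* PoD entries at boot              */
    unsigned long current_pages = 514879; /* stale driver's starting figure   */
    unsigned long target        = 131072; /* illustrative 512M target, pages  */

    /* PoD needs (pod_entries - target) pages given back, but the old
     * driver only inflates by (current_pages - target). */
    printf("needed    = %lu pages\n", pod_entries - target);
    printf("inflated  = %lu pages\n", current_pages - target);
    printf("shortfall = %lu pages (~%.2f MiB)\n",
           pod_entries - current_pages,
           (pod_entries - current_pages) * 4 / 1024.0);  /* 8897, ~34.75 */
    return 0;
}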

        So later I will try to take balloon.c from xenlinux and build a new
driver, to see if that solves the problem.

        Thanks.

-----Original Message-----
From: tinnycloud [mailto:tinnycloud@xxxxxxxxxxx] 
Sent: 2010-11-30 21:59
To: 'George Dunlap'
cc: 'Chu Rui'; 'xen-devel@xxxxxxxxxxxxxxxxxxx'; 'Dan Magenheimer'
Subject: re: [Xen-devel] Xen balloon driver discuss

Thank you for your kind help.

Well, in your last mail you mentioned that the balloon will make pod_entries
equal to cache_size as soon as it starts to work when the guest boots.
From my understanding, if we start a guest such as:

xm cr xxx.hvm maxmem=2048 memory=512 

then we should set /local/domain/<domid>/memory/target to 522240
( (512M - 2M) * 1024, with 2M for VGA as in your other patch? )
to tell the balloon driver in the guest to inflate, right? And when the
balloon driver has ballooned the guest memory down to this target,
I think pod_entries will equal cache_size, right?
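
As a sanity check on that number, a tiny sketch (it assumes the 2M VGA hole
mentioned above, and that memory/target is in KiB):

#include <stdio.h>

int main(void)
{
    unsigned long mem_mib = 512, vga_mib = 2;  /* from "memory=512", 2M VGA */
    unsigned long target_kib = (mem_mib - vga_mib) * 1024;
    printf("target = %lu KiB\n", target_kib);  /* 522240 */
    return 0;
}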

I did some experiments on this, and the results show otherwise.

Step 1.
xm cr xxx.hvm maxmem=2048 memory=512

at the very beginning, I printed out the domain's tot_pages, 132088;
pod.entry_count 523776, that is 2046M; and pod.count 130560, that is 510M

(XEN) tot_pages 132088 pod_entries 523776 pod_count 130560


currently, /local/domain/<domid>/memory/target is by default written as
524288

after the guest starts up, the balloon driver balloons; when it finishes, I
can see pod.entry_count reduced to 23552 and pod.count to 14063

(XEN)     DomPage list too long to display
(XEN) Tot pages 132088  PoD entries=23552 cachesize=14063
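
Out of curiosity, the remaining distance between the two counters at this
point works out as follows (a throwaway sketch of mine, 4 KiB pages assumed;
the same sketch can be reused with other figures):

#include <stdio.h>

int main(void)
{
    /* Step 1 figures from the log above. */
    unsigned long entries = 23552, cache = 14063;
    printf("gap = %lu pages (~%.2f MiB)\n",
           entries - cache, (entries - cache) * 4 / 1024.0);  /* 9489, ~37.07 */
    return 0;
}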

Step 2.

In my understanding, /local/domain/<domid>/memory/target should be at least
510 * 1024, and then pod_entries will equal cache_size.

I used 500 instead, so I did: xm mem-set domain_id 500

then I can see pod.entry_count reduced to 22338 and pod.count to 15921,
still not equal

(XEN) Memory pages belonging to domain 4:
(XEN)     DomPage list too long to display
(XEN) Tot pages 132088  PoD entries=22338 cachesize=15921
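
Running the same sketch as in Step 1 with these figures (entries = 22338,
cache = 15921) gives a gap of 6417 pages, about 25.07 MiB, so the counters
are getting closer but are still apart.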

Step 3. 

Only after I did: xm mem-set domain_id 470
does pod.entry_count equal pod.count:
(XEN)     DomPage list too long to display
(XEN) Tot pages 130825  PoD entries=14677 cachesize=14677

Later, from the code, I learned that these two values are forced to be
equal, in:

out_entry_check:
    /* If we've reduced our "liabilities" beyond our "assets", free some */
    if ( p2md->pod.entry_count < p2md->pod.count )
    {
        p2m_pod_set_cache_target(d, p2md->pod.entry_count);
    }
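
(If I am reading it right, p2m_pod_set_cache_target() here frees pages from
the PoD cache until pod.count drops to pod.entry_count, which is why the two
numbers read as equal once the target falls far enough.)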


So, in conclusion, it looks like something goes wrong: the PoD entry count
should equal the cache size (pod.count) as soon as the balloon driver has
inflated by max - target, right?

Many thanks.




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel