WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
 

xen-devel

[Xen-devel] re: Xen balloon driver discuss

Hi Dan,

 

         Thank you for your presentation summarizing memory overcommit; it was really vivid and a great help.

         These days, I guess the strategy I have in mind falls into solution Set C in the PDF.

 

         The tmem solution you worked out for memory overcommit is both efficient and effective.

         I will give it a try on a Linux guest.

 

         The real situation I have is that most of the VMs running on the host are Windows, so I had to come up with those policies to balance memory.

         Although such policies are all workload dependent, the good news is that the host workload is configurable and not very heavy,

so I will try to figure out a favorable policy. The policies referred to in the PDF are a good starting point for me.

 

         Today, instead of trying to expose /proc/meminfo through shared pages, I hacked the balloon driver to have another

         workqueue periodically write meminfo into xenstore through xenbus, which solves the problem of high xenstore CPU

         utilization.

 

         Later I will try to find out more about how Citrix does this.

         Thanks for your help. Do you have any better ideas for Windows guests?

        

 

From: Dan Magenheimer [mailto:dan.magenheimer@xxxxxxxxxx]
Date: 2010.11.23 1:47
To: MaoXiaoyun; xen devel
CC: george.dunlap@xxxxxxxxxxxxx
Subject: RE: Xen balloon driver discuss

 

Xenstore IS slow and you could improve xenballoond performance by only sending the single CommittedAS value from xenballoond in domU to dom0 instead of all of /proc/meminfo.   But you are making an assumption that getting memory utilization information from domU to dom0 FASTER (e.g. with a shared page) will provide better ballooning results.  I have not found this to be the case, which is what led to my investigation into self-ballooning, which led to Transcendent Memory.  See the 2010 Xen Summit for more information.

 

In your last paragraph below “Regards balloon strategy”, the problem is it is not easy to define “enough memory” and “shortage of memory” within any guest and almost impossible to define it and effectively load balance across many guests.  See my Linux Plumber’s Conference presentation (with complete speaker notes) here:

 

http://oss.oracle.com/projects/tmem/dist/documentation/presentations/MemMgmtVirtEnv-LPC2010-Final.pdf

 

http://oss.oracle.com/projects/tmem/dist/documentation/presentations/MemMgmtVirtEnv-LPC2010-SpkNotes.pdf

 

From: MaoXiaoyun [mailto:tinnycloud@xxxxxxxxxxx]
Sent: Sunday, November 21, 2010 9:33 PM
To: xen devel
Cc: Dan Magenheimer; george.dunlap@xxxxxxxxxxxxx
Subject: RE: Xen balloon driver discuss

 

 
Since /proc/meminfo is currently sent to domain 0 via xenstore, which in my opinion is slow,
what I want to do is this: there is a shared page between domU and dom0, and domU periodically
updates the meminfo in that page, while on the other side dom0 retrieves the updated data to
calculate the target, which is used by the guest for ballooning.
 
The problem I've met is that I currently don't know how to implement a shared page between
dom0 and domU.
Would dom0 allocate an unbound event channel and wait for the guest to connect, then transfer
the data through a grant table?
Or does someone have a more efficient way?
Many thanks.
 
> From: tinnycloud@xxxxxxxxxxx
> To: xen-devel@xxxxxxxxxxxxxxxxxxx
> CC: dan.magenheimer@xxxxxxxxxx; George.Dunlap@xxxxxxxxxxxxx
> Subject: Xen balloon driver discuss
> Date: Sun, 21 Nov 2010 14:26:01 +0800
>
> Hi:
> Greeting first.
>
> I was trying to run about 24 HVMs (currently only Linux; later this will
> involve Windows) on one physical server with 24GB memory and 16 CPUs.
> Each VM is configured with 2GB memory, and I reserved 8GB memory for
> dom0.
> For safety reasons, only domain U's memory is allowed to balloon.
>
> Inside domain U, I used the xenballoond provided by xensource to
> periodically write /proc/meminfo into xenstore in dom
> 0 (/local/domain/did/memory/meminfo).
> And in domain 0, I wrote a python script to read the meminfo and, following
> the Xen-provided strategy, use Committed_AS to calculate the domain U balloon
> target.
> The time interval is 1 second.
>
> Inside each VM, I set up an Apache server for testing. Well, I'd
> like to say the result is not so good.
> It appears there is too much reading/writing of xenstore: when I apply
> some stress (using ab) to the guest domains,
> the CPU usage of xenstore goes up to 100%. Thus the monitor running in
> dom0 also responds quite slowly.
> Also, in the ab test, Committed_AS grows very fast and reaches maxmem
> in a short time, but in fact the guest really needs only a small amount
> of memory, so I guess there is more to be taken into consideration
> for ballooning.
>
> For the xenstore issue, I first plan to write a C program inside domain
> U to replace xenballoond, to see whether the situation
> improves. If not, how about setting up an event channel directly between
> domU and dom0? Would that be faster?
>
> Regarding the balloon strategy, I would do it like this: when there is
> enough memory, just fulfill the guest balloon request, and when there is a
> shortage of memory, distribute memory evenly among the guests that request
> inflation.
>
> Does anyone have better suggestion, thanks in advance.
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel