
RE: [Xen-devel] [PATCH] linux/balloon: don't allow ballooning down a domain below a reasonable limit



> I was planning on providing both Model C and Model D (see below),
> but let me know if you will only accept Model C (or even Model B)
> and I will adjust accordingly.

I think all these models are wrong :-)

'Free' guest memory often serves a useful purpose, acting as buffer
cache and the like, so ballooning it out unnecessarily is probably not a
good thing. Model D might work better if we had a way of giving up
memory that wasn't 'final', i.e. we could surrender pages back to Xen
but get a ticket with which we could later ask Xen whether it still had
each page; if Xen hadn't yet zeroed a page and handed it to someone
else, we could get the original back. Hence, we could treat pages handed
back to Xen as a kind of 'unreliable swap device'.
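
To make that concrete, here's a toy userspace model of the ticket idea.
Nothing below is a real Xen interface: xen_surrender_page(),
xen_scavenge() and xen_redeem_page() are invented names standing in for
hypothetical hypercalls, and the "hypervisor" is just a table in the
same process.

/* Toy model of the ticket scheme sketched above.  All "hypercalls"
 * here are hypothetical; the hypervisor is a table in this process. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE   4096
#define MAX_TICKETS 16

static char *held[MAX_TICKETS];         /* pages "Xen" still holds */

/* Guest gives a page up but gets a ticket it may later try to redeem. */
static int xen_surrender_page(const char *page, int *ticket)
{
        for (int t = 0; t < MAX_TICKETS; t++) {
                if (!held[t]) {
                        held[t] = malloc(PAGE_SIZE);
                        if (!held[t])
                                return -1;
                        memcpy(held[t], page, PAGE_SIZE);
                        *ticket = t;
                        return 0;
                }
        }
        return -1;                      /* no slot: the give-up is final */
}

/* Xen reuses a held page for another domain: the ticket goes dead. */
static void xen_scavenge(int ticket)
{
        free(held[ticket]);
        held[ticket] = NULL;
}

/* Redeeming succeeds only if Xen hasn't zeroed and reused the page. */
static int xen_redeem_page(int ticket, char *page)
{
        if (!held[ticket])
                return -1;
        memcpy(page, held[ticket], PAGE_SIZE);
        free(held[ticket]);
        held[ticket] = NULL;
        return 0;
}

int main(void)
{
        char page[PAGE_SIZE] = "buffer-cache data";
        int t;

        xen_surrender_page(page, &t);
        memset(page, 0, PAGE_SIZE);     /* guest reuses the frame */
        if (xen_redeem_page(t, page) == 0)
                printf("fast path, page intact: %s\n", page);

        xen_surrender_page(page, &t);
        xen_scavenge(t);                /* Xen gave it away meanwhile */
        if (xen_redeem_page(t, page) != 0)
                printf("ticket %d dead; fall back to real swap\n", t);
        return 0;
}

When Xen actually scavenges a held page is of course the interesting
policy question; the toy above just does it on demand.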

Even if we had such extensions, I'm not sure that having every domain
eagerly surrender memory to Xen is necessarily the best approach. It may
be better to have domains just indicate to domain0 whether they are in a
position to release memory, or whether they could actively benefit from
more, and then have domain0 act as arbiter.
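
Something like the following loop in domain0 is the kind of thing I have
in mind. All the names here (dom_report, set_target()) are made up for
illustration and are not any real Xen or libxc API; in practice the
reports and targets would presumably travel over xenstore.

#include <stdio.h>

struct dom_report {
        int  domid;
        long current_kb;        /* what the domain holds now */
        long releasable_kb;     /* what it says it could give up */
        long wanted_kb;         /* extra it says it could actively use */
};

/* Placeholder for however domain0 pushes a new target to a guest --
 * in reality presumably a xenstore write, not this printf. */
static void set_target(int domid, long target_kb)
{
        printf("dom%d -> target %ld kB\n", domid, target_kb);
}

/* One arbitration pass: move min(total offered, total wanted),
 * pro rata on both sides, taking only what domains offered. */
static void arbitrate(struct dom_report *d, int n)
{
        long pool = 0, want = 0;

        for (int i = 0; i < n; i++) {
                pool += d[i].releasable_kb;
                want += d[i].wanted_kb;
        }

        long moving = pool < want ? pool : want;
        if (moving <= 0)
                return;

        for (int i = 0; i < n; i++) {
                if (d[i].releasable_kb > 0)     /* donor: shrink */
                        set_target(d[i].domid, d[i].current_kb -
                                   moving * d[i].releasable_kb / pool);
                else if (d[i].wanted_kb > 0)    /* claimant: grow */
                        set_target(d[i].domid, d[i].current_kb +
                                   moving * d[i].wanted_kb / want);
        }
}

int main(void)
{
        struct dom_report doms[] = {
                { 1, 198 * 1024, 48 * 1024, 0 },    /* offers 48MB */
                { 2, 150 * 1024, 0, 100 * 1024 },   /* wants 100MB */
        };

        arbitrate(doms, 2);
        return 0;
}

Note that this never takes more from a donor than it offered, which is
the point: the guest, not domain0, decides what is releasable.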

Ian

 
> ===============
> MODEL A (current):
> 
> Domain 0 sez: "Hey guest A, I have no clue how much memory you have
> (though you may or may not have obeyed a previous request) or how much
> you need, but change your memory usage to 150MB"
> Guest A (silently): "(Silly domain 0 wants me to reduce my memory
> usage to 150MB but my minimum is 160MB.  Well, I guess I'll do
> my best.)"
> ===============
> MODEL B (guest provides info when prodded):
> 
> Domain 0 sez: "Hey guest A, tell me how much memory you have and how
> much you need"
> Guest A sez: "I have 198MB but I really only need 129MB"
> Domain 0 sez: "Guest A, reduce your memory usage to 129MB"
> Guest A (silently): "(My min is 150MB but I'll do my best)"
> Domain 0 sez: "Hey guest A, tell me how much memory you have and how
> much you need"
> Guest A sez: "I have 150MB but I really only need 129MB"
> [etc]
> ===============
> MODEL C (guest provides info regularly):
> 
> Guest A sez: "I have 198MB, I really only need 180MB, and my
> minimum is 150MB.  I'll provide another update in a second."
> [one second later]
> Guest A sez: "I have 198MB, I really only need 129MB, and my
> minimum is 150MB.  I'll provide another update in a second."
> Domain 0 sez: "Guest A, reduce your memory to 150MB"
> Guest A (silently): "(ballooning down now to 150MB)"
> [one second later]
> Guest A sez: "I have 150MB, I really need 250MB and my minimum
> is 150MB. I'll provide another update in a second."
> Domain 0 sez: "Guest A, increase your memory to 250MB"
> ===============
> MODEL D (autoballooning):
> 
> Domain 0 sez: "Hey Guest A, do the right thing with your memory"
> Guest A sez: "I have 198MB, I really only need 129MB, and my
> minimum is 150MB"
> Guest A (silently): "(ballooning down now to 150MB)"
> [one second later]
> Guest A sez: "I have 150MB, I really need 250MB, and my
> minimum is 150MB"
> Guest A (silently): "(ballooning up now to 250MB... oops looks
> like I can't get that much but I'll take what I can get)"
> [one second later]
> Guest A sez: "I have 200MB, I really need 300MB, and my
> minimum is 150MB"
> Guest A (silently): "(ballooning up now to 300MB... oops looks
> like I can't get any more... time to start swapping)"
> 
> 
> ===================================
> Thanks... for the memory
> I really could use more / My throughput's on the floor
> The balloon is flat / My swap disk's fat / I've O-O-M's in store
> Overcommitted we are
> (with apologies to the late great Bob Hope)
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

