This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] [PATCH Xen-unstable] Balloon down memory to achieve enough DMA32 memory for PV guests with PCI pass-through to successfully launch.

To: "Konrad Rzeszutek Wilk" <konrad.wilk@xxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH Xen-unstable] Balloon down memory to achieve enough DMA32 memory for PV guests with PCI pass-through to successfully launch.
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Mon, 16 Nov 2009 15:31:10 +0000
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, keir.fraser@xxxxxxxxxxxxx
Delivery-date: Mon, 16 Nov 2009 07:31:33 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20091116150439.GB30967@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20091113221602.GA30243@xxxxxxxxxxxxxxxxxxx> <4B01314F020000780001FDE4@xxxxxxxxxxxxxxxxxx> <20091116150439.GB30967@xxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> 16.11.09 16:04 >>>
>On Mon, Nov 16, 2009 at 10:02:39AM +0000, Jan Beulich wrote:
>> For one, it is logically (just for a much smaller total amount and for a more
>> narrow memory range) identical to what would be needed for 32-bit-pv
>> DomU-s on 64-bit hv, so *if* this patch is considered conceptually valid,
>Meaning if you want to run an 8GB 32-bit PV, do the same. But if the PV is,
>say, using 512MB, there is no need to allocate 64MB?

No, you probably misunderstood (and I probably implied too much in my
response): On a system with more than 168G, just ballooning out
arbitrary memory from Dom0 in order to start a 32-bit pv DomU won't
guarantee that the domain can actually start, as memory beyond the
128G boundary is unusable there for such guests.

Conceptually, ballooning out arbitrary amounts of memory to find a
certain amount below 4G is identical to ballooning out more than the
amount a guest needs in order to find as much as it needs below 128G.

>> then it should be abstracted and used for both purposes.
>> I think, however, that it is conceptually wrong, because it may mean that
>> all of the memory possibly removable from Dom0 can get ballooned out
>> without in fact yielding the memory needed by the to-be-started DomU.
>There is a limit at which it stops. Perhaps I should add a failsafe
>wherein if we don't get enough of the memory, we give it back to Dom0?

That would reduce the risk for Dom0, yes, but it doesn't eliminate it (and
I am of the opinion that especially as long as Dom0 is not restartable we
have to avoid putting any sort of extra risk on it).

>> Besides that, hard-coding the value to 64MB doesn't seem very nice
>> either (while I realize that both 2.6.18 and pv-ops default to 64MB, I
>> do not think this is really appropriate, especially given that in the 2.6.18
>> tree Dom0 can get run with as little as 2MB, and I highly doubt that the
>> demand of a DomU can by default be assumed to exceed that of Dom0),
>> and in particular doesn't help with the case where one really has to use
>> a larger than the default size swiotlb.
>Sure. But the user will get a notice in the log pointing them to the fact that
>we could not get enough memory. Maybe I should expand it some more and say
>something along these lines: "Your best bet is to use dom0_mem=2GB. We've tried
>to deflate the amount of memory the privileged domain is using, but we fear
>to go any lower. Your guest might not start."
>Since the SWIOTLB size is determined by the 'swiotlb' argument passed to
>the guest, what if we scanned for that and, if it has a number, calculated
>how much memory that amounts to and used that value? The default still being at 

That might be an option, but is very Linux-centric. I think the amount should
be configurable per guest if something like this is being done at all.


Xen-devel mailing list