WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
xen-devel

Re: [Xen-devel] Xen 4.0.1 "xc_map_foreign_batch: mmap failed: Cannot allocate memory"

To: "Keir Fraser" <keir@xxxxxxx>
Subject: Re: [Xen-devel] Xen 4.0.1 "xc_map_foreign_batch: mmap failed: Cannot allocate memory"
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Fri, 17 Dec 2010 09:22:56 +0000
Cc: anthony.perard@xxxxxxxxxx, Charles Arnold <CARNOLD@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>, Stefano Stabellini <stefano.stabellini@xxxxxxxxxxxxx>
Delivery-date: Fri, 17 Dec 2010 01:23:43 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C9302D00.D25D%keir@xxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4D0A17C6020000910006886E@xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx> <C9302D00.D25D%keir@xxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> On 16.12.10 at 21:54, Keir Fraser <keir@xxxxxxx> wrote:
> On 16/12/2010 20:44, "Charles Arnold" <carnold@xxxxxxxxxx> wrote:
> 
>>>> On 12/16/2010 at 01:33 PM, in message <C9302813.2966F%keir@xxxxxxx>, Keir
>> Fraser <keir@xxxxxxx> wrote:
>>> On 16/12/2010 19:23, "Charles Arnold" <carnold@xxxxxxxxxx> wrote:
>>> 
>>>> The bug is that qemu-dm seems to make the assumption that it can mmap
>>>> from dom0 all the memory with which the guest has been defined instead
>>>> of the memory that is actually available on the host.
>>> 
>>> 32-bit dom0? Hm, I thought the qemu mapcache was supposed to limit the total
>>> amount of guest memory mapped at one time, for a 32-bit qemu. For 64-bit
>>> qemu I wouldn't expect to find a limit as low as 3.25G.
>> 
>> Sorry, I should have specified that it is a 64 bit dom0 / hypervisor.
> 
> Okay, well I'm not sure what limit qemu-dm is hitting then. Mapping 3.25G of
> guest memory will only require a few megabytes of pagetables for the qemu
> process in dom0. Perhaps there is a ulimit or something set on the qemu
> process?

I don't think a ulimit comes into play here - if Dom0 is given more
memory, qemu-dm won't fail. The real question is why qemu-dm uses
unbounded mmap()s in the first place. Even if the memory went only
into page tables (which I doubt), this is a scalability problem.

Of course, even an address-space ulimit placed on qemu-dm should not
cause an outright failure - clearly there is a lack of error handling
here.

> If we can work out and detect this limit, perhaps 64-bit qemu-dm could have
> a mapping cache similar to 32-bit qemu-dm, limited to some fraction of the
> detected mapping limit. And/or, on mapping failure, we could reclaim
> resources by simply zapping the existing cached mappings. Seems there's a
> few options. I don't really maintain qemu-dm myself -- you might get some
> help from Ian Jackson, Stefano, or Anthony Perard if you need more advice.

Looking forward to their comments.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel