This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


RE: [Xen-devel] [PATCH 0/6] x86: break up post-boot non-order-zero allocations

To: Jan Beulich <JBeulich@xxxxxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: RE: [Xen-devel] [PATCH 0/6] x86: break up post-boot non-order-zero allocations
From: Dan Magenheimer <dan.magenheimer@xxxxxxxxxx>
Date: Tue, 5 Apr 2011 19:50:51 -0700 (PDT)
Delivery-date: Tue, 05 Apr 2011 19:51:40 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4D9AECA60200007800039F26@xxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4D9AECA60200007800039F26@xxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thanks very much Jan for making forward progress on this!

A couple things:

IIRC, PCI-passthrough is another user of multi-page allocations.

At some point, does it make sense to eliminate the multi-page
allocation functionality, at least from the "normal" page
allocation routines, and instead add a separate, explicitly
documented call:

/* documentation about why not to use this */

so that it is very obvious when future page allocation
users "regress" by adding a multi-page allocation request?

Thanks again!

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxxxx]
> Sent: Tuesday, April 05, 2011 2:19 AM
> To: xen-devel@xxxxxxxxxxxxxxxxxxx
> Subject: [Xen-devel] [PATCH 0/6] x86: break up post-boot non-order-zero
> allocations
> While tmem is most affected by this, due to fragmentation it is
> generally a bad idea to require runtime allocations of more than a
> single page in size.
> 1: remove direct cpumask_t members from struct vcpu and struct domain
> 2: x86: split struct vcpu
> 3: x86: move pv-only members of struct vcpu to struct pv_vcpu
> 4: x86: split struct domain
> 5: x86: introduce alloc_vcpu_guest_context()
> 6: passthrough: use domain pirq as index of struct hvm_irq_dpci's
> hvm_timer array
> With this, structure sizes are below page size, and no longer depend
> significantly on NR_CPUS. This series, however, doesn't eliminate
> all non-order-zero allocations that happen post boot (i.e. mostly
> during domain creation). Items that are known to need addressing
> are
> - nr_irqs-sized allocation of ->arch.irq_pirq[] in
>   xen/arch/x86/domain.c:arch_domain_create()
> - ->nr_pirqs-sized allocations in
>   xen/drivers/passthrough/io.c:pt_irq_create_bind_vtd()
> - ->nr_pirqs-sized allocation of ->arch.pirq_irq[] in
>   xen/arch/x86/domain.c:arch_domain_create()
> - ->nr_pirqs-sized allocation of ->pirq_to_evtchn[] in
>   xen/common/domain.c:domain_create()
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxxxx>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
