Re: [Xen-devel] [PATCH] x86-64: don't use xmalloc_array() for allocation of the (per-CPU) IDTs

To: Jan Beulich <JBeulich@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: Re: [Xen-devel] [PATCH] x86-64: don't use xmalloc_array() for allocation of the (per-CPU) IDTs
From: Keir Fraser <keir@xxxxxxx>
Date: Thu, 13 Oct 2011 08:50:53 +0100
On 13/10/2011 08:21, "Jan Beulich" <JBeulich@xxxxxxxx> wrote:

> The IDTs being exactly a page in size, using xmalloc() here is rather
> inefficient: it requires double the amount of memory to be allocated,
> with almost an entire page wasted. For hot-plugged CPUs, switching to a
> direct page allocation also eliminates one more non-order-zero runtime
> allocation.
> 
> For x86-32, however, the IDT is exactly half a page, so allocating a
> full page would be wasteful; that path therefore continues to use
> xmalloc() as before.
> 
> With most of the affected functions' bodies now inside #ifdef-s, it
> might be reasonable to split those parts out into subarch-specific
> code...
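
To spell out the inefficiency Jan mentions: xmalloc() prepends a small
bookkeeping header to each block, so a request for exactly PAGE_SIZE bytes
cannot fit in a single page and gets rounded up to an order-1 (two-page)
allocation. A standalone illustration of the arithmetic (not the actual
allocator internals; the 32-byte header size is made up):

    #include <stdio.h>

    #define PAGE_SIZE 4096UL
    #define HDR_SIZE    32UL  /* hypothetical per-block header */

    /* Smallest power-of-two page count covering 'bytes'. */
    static unsigned int order_from_bytes(unsigned long bytes)
    {
        unsigned long pages = (bytes + PAGE_SIZE - 1) / PAGE_SIZE;
        unsigned int order = 0;

        while ( (1UL << order) < pages )
            order++;
        return order;
    }

    int main(void)
    {
        /* Prints 1: a page-sized request plus its header needs two
         * pages, i.e. twice the amount actually asked for. */
        printf("%u\n", order_from_bytes(PAGE_SIZE + HDR_SIZE));
        return 0;
    }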

Well, there is also an opportunity for code merging. There is generally no
reason for x86-64 to use alloc_domheap_pages() where x86-32 uses
alloc_xenheap_pages() (e.g., for allocating per_cpu(gdt_table)) -- both
architectures can use alloc_xenheap_pages() in this case, as sketched below.
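
A rough sketch of what the merged allocation might look like (untested and
purely illustrative; it assumes alloc_xenheap_pages() taking memflags and
honouring MEMF_node() for placement):

    /* Hypothetical common GDT allocation path for x86-32 and x86-64. */
    order = get_order_from_pages(NR_RESERVED_GDT_PAGES);
    gdt = alloc_xenheap_pages(order, MEMF_node(cpu_to_node(cpu)));
    if ( !gdt )
        goto oom;
    per_cpu(gdt_table, cpu) = gdt;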

Also, we might get rid of some further #ifdef-ery if we added an alloc/free
interface where *both* functions take a size parameter. We could then decide
dynamically, based on that parameter, whether to use xmalloc() or
alloc_xenheap_pages(). Knowing the size on free would also let us free such
allocations easily.
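
A minimal sketch of the idea, with made-up names (alloc_sized()/free_sized()
are not existing functions) and the page size as an arbitrary threshold:

    /* Hypothetical sized alloc/free pair: small allocations go through
     * xmalloc, page-sized and larger ones straight to the page
     * allocator.  Passing the same size on free removes the need to
     * remember which allocator was used, and with it the #ifdef-ery. */
    static void *alloc_sized(unsigned long size, unsigned int memflags)
    {
        if ( size < PAGE_SIZE )
            return _xmalloc(size, SMP_CACHE_BYTES);
        return alloc_xenheap_pages(get_order_from_bytes(size), memflags);
    }

    static void free_sized(void *p, unsigned long size)
    {
        if ( !p )
            return;
        if ( size < PAGE_SIZE )
            xfree(p);
        else
            free_xenheap_pages(p, get_order_from_bytes(size));
    }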

Finally, I'm not sure about more x86-64 code movement: really, I'd like to
kill x86-32 entirely.

Anyway, despite all this...

> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>

Acked-by: Keir Fraser <keir@xxxxxxx>

> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -639,9 +639,6 @@ static void cpu_smpboot_free(unsigned in
>  {
>      unsigned int order;
>  
> -    xfree(idt_tables[cpu]);
> -    idt_tables[cpu] = NULL;
> -
>      order = get_order_from_pages(NR_RESERVED_GDT_PAGES);
>  #ifdef __x86_64__
>      if ( per_cpu(compat_gdt_table, cpu) )
> @@ -650,10 +647,15 @@ static void cpu_smpboot_free(unsigned in
>          free_domheap_pages(virt_to_page(per_cpu(compat_gdt_table, cpu)),
>                             order);
>      per_cpu(compat_gdt_table, cpu) = NULL;
> +    order = get_order_from_bytes(IDT_ENTRIES * sizeof(**idt_tables));
> +    if ( idt_tables[cpu] )
> +        free_domheap_pages(virt_to_page(idt_tables[cpu]), order);
>  #else
>      free_xenheap_pages(per_cpu(gdt_table, cpu), order);
> +    xfree(idt_tables[cpu]);
>  #endif
>      per_cpu(gdt_table, cpu) = NULL;
> +    idt_tables[cpu] = NULL;
>  
>      if ( stack_base[cpu] != NULL )
>      {
> @@ -691,19 +693,25 @@ static int cpu_smpboot_alloc(unsigned in
>      if ( !page )
>          goto oom;
>      per_cpu(gdt_table, cpu) = gdt = page_to_virt(page);
> +    order = get_order_from_bytes(IDT_ENTRIES * sizeof(**idt_tables));
> +    page = alloc_domheap_pages(NULL, order,
> +                               MEMF_node(cpu_to_node(cpu)));
> +    if ( !page )
> +        goto oom;
> +    idt_tables[cpu] = page_to_virt(page);
>  #else
>      per_cpu(gdt_table, cpu) = gdt = alloc_xenheap_pages(order, 0);
>      if ( !gdt )
>          goto oom;
> +    idt_tables[cpu] = xmalloc_array(idt_entry_t, IDT_ENTRIES);
> +    if ( idt_tables[cpu] == NULL )
> +        goto oom;
>  #endif
>      memcpy(gdt, boot_cpu_gdt_table,
>             NR_RESERVED_GDT_PAGES * PAGE_SIZE);
>      BUILD_BUG_ON(NR_CPUS > 0x10000);
>      gdt[PER_CPU_GDT_ENTRY - FIRST_RESERVED_GDT_ENTRY].a = cpu;
>  
> -    idt_tables[cpu] = xmalloc_array(idt_entry_t, IDT_ENTRIES);
> -    if ( idt_tables[cpu] == NULL )
> -        goto oom;
>      memcpy(idt_tables[cpu], idt_table,
>             IDT_ENTRIES*sizeof(idt_entry_t));
>  



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel