xen-devel

Re: [Xen-devel] [xen-unstable test] 6947: regressions - trouble: broken/fail/pass

To: "Keir Fraser" <keir.xen@xxxxxxxxx>
Subject: Re: [Xen-devel] [xen-unstable test] 6947: regressions - trouble: broken/fail/pass
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Mon, 02 May 2011 13:24:59 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 02 May 2011 05:25:44 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C9E45E5C.17154%keir.xen@xxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4DBEB8FA020000780003F276@xxxxxxxxxxxxxxxxxx> <C9E45E5C.17154%keir.xen@xxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> On 02.05.11 at 14:13, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
> On 02/05/2011 13:00, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
> 
>>> (2) Change the xmalloc lock to spin_lock_irqsave(). This would also have to
>>> be transitively applied to at least the heap_lock in page_alloc.c. One issue
>>> with this (and indeed with calling alloc_heap_pages at all with IRQs
>>> disabled) is that alloc_heap_pages does actually assume IRQs are enabled
>>> (for example, it calls flush_tlb_mask()) -- actually I think this limitation
>>> probably predates the tsc rendezvous changes, and could be a source of
>>> latent bugs in earlier Xen releases.
>> 
>> (2b) Make only the xmalloc() lock disable IRQs, and don't allow it to
>> go into the page allocator when IRQs were disabled on entry. Have
>> a reserve page available on each pCPU (requires that in a single
>> hypercall there can't be allocations adding up to more than PAGE_SIZE),
>> and when consumed, re-fill this page e.g. from a softirq or tasklet.
> 
> You'd have to release/acquire the xmalloc lock across the ->get_mem call.

Not sure what you're trying to make me aware of - the initial acquire
would be spin_lock_irqsave(), the lock would be dropped again with
spin_unlock_irqrestore() prior to ->get_mem(), and the ->get_mem()
handler would be responsible for not calling into the page allocator
when interrupts are (still) disabled, instead using the per-CPU reserve
page if populated and triggering its re-population.
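
Roughly, as a sketch only (pool_lock, try_alloc_from_pool(),
add_region_to_pool(), and refill_tasklet below are made-up placeholders
for the actual xmalloc pool internals, and pool_get_mem() stands in for
the ->get_mem() handler - this is not real code):

static DEFINE_PER_CPU(void *, xmalloc_reserve_page);
static DEFINE_SPINLOCK(pool_lock);
static struct tasklet refill_tasklet; /* re-fills the reserve page; init/handler elided */

static void *pool_get_mem(void)
{
    void *p;

    /* Called with the pool lock dropped and the caller's IRQ state restored. */
    if ( local_irq_is_enabled() )
        return alloc_xenheap_page();

    /* IRQs off: consume the per-CPU reserve and schedule its re-population. */
    p = xchg(&this_cpu(xmalloc_reserve_page), NULL);
    if ( p )
        tasklet_schedule(&refill_tasklet);

    return p; /* may be NULL if the reserve was already used up */
}

void *_xmalloc(unsigned long size, unsigned long align)
{
    unsigned long flags;
    void *obj, *page;

    spin_lock_irqsave(&pool_lock, flags);
    obj = try_alloc_from_pool(size, align);
    if ( !obj )
    {
        /* Drop the lock (restoring the IRQ state) across ->get_mem(). */
        spin_unlock_irqrestore(&pool_lock, flags);
        page = pool_get_mem();
        spin_lock_irqsave(&pool_lock, flags);
        if ( page )
        {
            add_region_to_pool(page, PAGE_SIZE);
            obj = try_alloc_from_pool(size, align);
        }
    }
    spin_unlock_irqrestore(&pool_lock, flags);

    return obj;
}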

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
