WARNING - OLD ARCHIVES

This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/
   
 
 
To: Jan Beulich <JBeulich@xxxxxxxxxx>
Subject: Re: [Xen-devel] [xen-unstable test] 6947: regressions - trouble: broken/fail/pass
From: Keir Fraser <keir.xen@xxxxxxxxx>
Date: Mon, 02 May 2011 14:14:15 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Mon, 02 May 2011 06:15:33 -0700
In-reply-to: <4DBEBFE7020000780003F29F@xxxxxxxxxxxxxxxxxx>
User-agent: Microsoft-Entourage/12.29.0.110113
On 02/05/2011 13:29, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:

>>>> On 02.05.11 at 14:19, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
>> On 02/05/2011 13:00, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
>> 
>>>> (3) Restructure the interrupt code to do less work in IRQ context. For
>>>> example tasklet-per-irq, and schedule on the local cpu. Protect a bunch of
>>>> the PIRQ structures with a non-IRQ lock. Would increase interrupt latency
>>>> if the local CPU is interrupted in hypervisor context. I'm not sure about
>>>> this one -- I'm not that happy about the amount of work now done in hardirq
>>>> context, but I'm not sure on the performance impact of deferring the work.
>>> 
>>> I'm not inclined to make changes in this area for the purpose at hand
>>> either (again, Linux gets away without this - would have to check how
>>> e.g. KVM gets the TLB flushing done, or whether they don't defer
>>> flushes like we do).
>> 
>> Oh, another way would be to make lookup_slot invocations from IRQ context be
>> RCU-safe. Then the radix tree updates would not have to synchronise on the
>> irq_desc lock? And I believe Linux has examples of RCU-safe usage of radix
> 
> I'm not sure - the patch doesn't introduce the locking (i.e. the
> translation arrays used without the patch also get updated under
> lock). I'm also not certain about slot recycling aspects (i.e. what
> the result would be if freeing slots got deferred via RCU, but the
> same slot then needed to be used again before the grace period
> expires). Quite possibly this consideration is moot, just resulting
> from my only half-baked understanding of RCU...

The most straightforward way to convert to RCU with the most similar
synchronising semantics would be to add a 'live' boolean flag to each
pirq-related struct that is stored in a radix tree. Then:
 * insertions into the radix tree would be moved before acquisition of the
irq_desc lock; 'live' would then be set under the lock;
 * deletions would clear 'live' under the lock; the actual radix-tree
deletion would happen after the irq_desc lock is released;
 * lookups would happen as usual under the irq_desc lock, but with an extra
test of the 'live' flag.

The main complexity of this approach would probably be in breaking up the
insertions/deletions across the irq_desc-lock critical section. Basically
the 'live' flag update would happen wherever the insertion/deletion happens
right now, but the physical insertion/deletion would be moved respectively
earlier/later.

We'd probably also need an extra lock to protect against concurrent
radix-tree update operations (should be pretty straightforward to add
however, needing to protect *only* the radix-tree update calls).
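To make the ordering concrete, here is a minimal user-space sketch of the
scheme. It is purely illustrative: a fixed-size array stands in for the radix
tree, pthread mutexes stand in for the irq_desc lock and the new tree-update
lock, and all names (pirq_entry, tree_lock, desc_lock) are made up rather than
Xen's; the RCU grace-period wait before freeing is only noted in a comment.

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

#define NR_SLOTS 16

struct pirq_entry {
    int pirq;
    bool live;   /* set/cleared only under desc_lock */
};

/* Stand-in for the radix tree. */
static struct pirq_entry *slots[NR_SLOTS];

/* Protects *only* the tree-update calls (the extra lock mentioned above). */
static pthread_mutex_t tree_lock = PTHREAD_MUTEX_INITIALIZER;

/* Stand-in for the irq_desc lock. */
static pthread_mutex_t desc_lock = PTHREAD_MUTEX_INITIALIZER;

/* Insertion: physical tree insertion first, then set 'live' under the
   irq_desc lock, at the point where the insertion happens today. */
static void pirq_insert(int slot, struct pirq_entry *e)
{
    e->live = false;
    pthread_mutex_lock(&tree_lock);
    slots[slot] = e;                    /* radix_tree_insert() equivalent */
    pthread_mutex_unlock(&tree_lock);

    pthread_mutex_lock(&desc_lock);
    e->live = true;                     /* entry becomes visible to lookups */
    pthread_mutex_unlock(&desc_lock);
}

/* Deletion: clear 'live' under the irq_desc lock, then do the physical
   tree deletion afterwards.  (In the real scheme the entry would also
   only be freed after an RCU grace period.) */
static void pirq_delete(int slot)
{
    struct pirq_entry *e = slots[slot];

    pthread_mutex_lock(&desc_lock);
    e->live = false;
    pthread_mutex_unlock(&desc_lock);

    pthread_mutex_lock(&tree_lock);
    slots[slot] = NULL;                 /* radix_tree_delete() equivalent */
    pthread_mutex_unlock(&tree_lock);
}

/* Lookup under the irq_desc lock: a present but !live entry is treated
   as absent, which is the extra test the scheme adds. */
static struct pirq_entry *pirq_lookup(int slot)
{
    struct pirq_entry *e;

    pthread_mutex_lock(&desc_lock);
    e = slots[slot];
    if (e && !e->live)
        e = NULL;
    pthread_mutex_unlock(&desc_lock);
    return e;
}
```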

This is a pretty nice way to go imo.

 -- Keir

> Jan
> 
>> trees -- certainly Linux's radix-tree.h mentions RCU.
>> 
>> I must say this would be far more attractive to me than hacking the xmalloc
>> subsystem. That's pretty nasty.
>> 
>>  -- Keir
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel



