
Re: [Xen-devel] [xen-unstable test] 6947: regressions - trouble: broken/fail/pass

  • To: Jan Beulich <JBeulich@xxxxxxxxxx>
  • From: Keir Fraser <keir.xen@xxxxxxxxx>
  • Date: Mon, 02 May 2011 14:14:15 +0100
  • Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
  • Delivery-date: Mon, 02 May 2011 06:15:33 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcwIytXIPAut4LfhEE+Yoia5nR2F7w==
  • Thread-topic: [Xen-devel] [xen-unstable test] 6947: regressions - trouble: broken/fail/pass

On 02/05/2011 13:29, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:

>>>> On 02.05.11 at 14:19, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
>> On 02/05/2011 13:00, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
>>>> (3) Restructure the interrupt code to do less work in IRQ context. For
>>>> example tasklet-per-irq, and schedule on the local cpu. Protect a bunch of
>>>> the PIRQ structures with a non-IRQ lock. Would increase interrupt latency
>>>> if
>>>> the local CPU is interrupted in hypervisor context. I'm not sure about this
>>>> one -- I'm not that happy about the amount of work now done in hardirq
>>>> context, but I'm not sure on the performance impact of deferring the work.
>>> I'm not inclined to make changes in this area for the purpose at hand
>>> either (again, Linux gets away without this - would have to check how
>>> e.g. KVM gets the TLB flushing done, or whether they don't defer
>>> flushes like we do).
>> Oh, another way would be to make lookup_slot invocations from IRQ context be
>> RCU-safe. Then the radix tree updates would not have to synchronise on the
>> irq_desc lock? And I believe Linux has examples of RCU-safe usage of radix
> I'm not sure - the patch doesn't introduce the locking (i.e. the
> translation arrays used without the patch also get updated under
> lock). I'm also not certain about slot recycling aspects (i.e. what
> would the result be if freeing slots got deferred via RCU, but the
> same slot is then needed to be used again before the grace period
> expires). Quite possibly this consideration is moot, just resulting
> from my only half-baked understanding of RCU...

The most straightforward way to convert to RCU with the most similar
synchronising semantics would be to add a 'live' boolean flag to each
pirq-related struct that is stored in a radix tree. Then:
 * insertions into the radix tree would be moved to before acquisition of
the irq_desc lock, with 'live' then set under the lock;
 * deletions would clear 'live' under the lock, with the actual radix
deletion happening after irq_desc lock release;
 * lookups would happen as usual under the irq_desc lock, but with an extra
test of the 'live' flag.

The main complexity of this approach would probably be in breaking up the
insertions/deletions across the irq_desc-lock critical section. Basically,
the 'live' flag update would happen wherever the insertion/deletion happens
right now, while the physical radix-tree insertion/deletion would be moved
to before/after the critical section respectively.

We'd probably also need an extra lock to protect against concurrent
radix-tree update operations (should be pretty straightforward to add
however, needing to protect *only* the radix-tree update calls).
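To make the scheme concrete, here is a minimal sketch in plain C, with a
fixed array standing in for Xen's radix tree and pthread mutexes standing in
for the irq_desc lock and the proposed update lock. All names here
(pirq_entry, pirq_insert, radix_update_lock, etc.) are hypothetical, and in
real code the final free() would be RCU-deferred rather than immediate:

```c
#include <assert.h>
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdlib.h>

#define MAX_PIRQ 16

struct pirq_entry {
    int pirq;
    bool live;   /* set/cleared only under irq_desc_lock */
};

static struct pirq_entry *tree[MAX_PIRQ];   /* stand-in for the radix tree */
static pthread_mutex_t irq_desc_lock = PTHREAD_MUTEX_INITIALIZER;
/* Extra lock serialising only the tree-update calls themselves. */
static pthread_mutex_t radix_update_lock = PTHREAD_MUTEX_INITIALIZER;

/* Insertion: put the entry in the tree first, then mark it live under
 * irq_desc_lock, so a lookup never treats a half-set-up entry as live. */
static void pirq_insert(int pirq)
{
    struct pirq_entry *e = malloc(sizeof(*e));
    e->pirq = pirq;
    e->live = false;

    pthread_mutex_lock(&radix_update_lock);
    tree[pirq] = e;                 /* radix_tree_insert() equivalent */
    pthread_mutex_unlock(&radix_update_lock);

    pthread_mutex_lock(&irq_desc_lock);
    e->live = true;
    pthread_mutex_unlock(&irq_desc_lock);
}

/* Deletion: clear 'live' under irq_desc_lock, then do the physical
 * removal afterwards (the free would really be deferred via RCU). */
static void pirq_delete(int pirq)
{
    struct pirq_entry *e = tree[pirq];

    pthread_mutex_lock(&irq_desc_lock);
    e->live = false;
    pthread_mutex_unlock(&irq_desc_lock);

    pthread_mutex_lock(&radix_update_lock);
    tree[pirq] = NULL;              /* radix_tree_delete() equivalent */
    pthread_mutex_unlock(&radix_update_lock);
    free(e);
}

/* Lookup: as today, under irq_desc_lock, but an entry that is present
 * yet not 'live' is treated as absent. */
static struct pirq_entry *pirq_lookup(int pirq)
{
    struct pirq_entry *e;

    pthread_mutex_lock(&irq_desc_lock);
    e = tree[pirq];
    if (e && !e->live)
        e = NULL;
    pthread_mutex_unlock(&irq_desc_lock);
    return e;
}
```

The point of the ordering is that the tree mutations fall entirely outside
the irq_desc-lock critical section, so they no longer need to synchronise
on it; the 'live' flag alone decides visibility.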

This is a pretty nice way to go imo.

 -- Keir

> Jan
>> trees -- certainly Linux's radix-tree.h mentions RCU.
>> I must say this would be far more attractive to me than hacking the xmalloc
>> subsystem. That's pretty nasty.
>>  -- Keir
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxx
> http://lists.xensource.com/xen-devel
