This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/



To: "Keir Fraser" <keir.xen@xxxxxxxxx>
Subject: Re: [Xen-devel] [xen-unstable test] 6947: regressions - trouble: broken/fail/pass
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Tue, 03 May 2011 10:35:34 +0100
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 03 May 2011 02:36:45 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <C9E45FF1.17156%keir.xen@xxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4DBEB8FA020000780003F276@xxxxxxxxxxxxxxxxxx> <C9E45FF1.17156%keir.xen@xxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
>>> On 02.05.11 at 14:19, Keir Fraser <keir.xen@xxxxxxxxx> wrote:
> On 02/05/2011 13:00, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
>>> (3) Restructure the interrupt code to do less work in IRQ context. For
>>> example tasklet-per-irq, and schedule on the local cpu. Protect a bunch of
>>> the PIRQ structures with a non-IRQ lock. Would increase interrupt latency if
>>> the local CPU is interrupted in hypervisor context. I'm not sure about this
>>> one -- I'm not that happy about the amount of work now done in hardirq
>>> context, but I'm not sure on the performance impact of deferring the work.
>> I'm not inclined to make changes in this area for the purpose at hand
>> either (again, Linux gets away without this - would have to check how
>> e.g. KVM gets the TLB flushing done, or whether they don't defer
>> flushes like we do).
> Oh, another way would be to make lookup_slot invocations from IRQ context be
> RCU-safe. Then the radix tree updates would not have to synchronise on the
> irq_desc lock? And I believe Linux has examples of RCU-safe usage of radix
> trees -- certainly Linux's radix-tree.h mentions RCU.
> I must say this would be far more attractive to me than hacking the xmalloc
> subsystem. That's pretty nasty.

I think I can actually get away with two-stage insertion/removal
without needing RCU, based on the fact that, prior to these changes,
the translation arrays already held zero values meaning "does not
have a valid translation". Hence I can do tree insertion (removal)
with just d->event_lock held, but with the data not yet (no longer)
populated, and with valid <-> invalid transitions happening only
while the IRQ's descriptor lock is held (and interrupts are
disabled). All this requires is that readers properly deal with the
non-populated state, which they already had to do in the first
version of the patch anyway.

