To: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH 1/5] xen: events: use irq_alloc_desc(_at) instead of open-coding an IRQ allocator.
From: Ian Campbell <Ian.Campbell@xxxxxxxxxxxxx>
Date: Mon, 25 Oct 2010 19:02:16 +0100
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Stefano Stabellini <Stefano.Stabellini@xxxxxxxxxxxxx>, "linux-kernel@xxxxxxxxxxxxxxx" <linux-kernel@xxxxxxxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, "mingo@xxxxxxx" <mingo@xxxxxxx>, "tglx@xxxxxxxxxxxxx" <tglx@xxxxxxxxxxxxx>
In-reply-to: <20101025173522.GA5590@xxxxxxxxxxxx>
Organization: Citrix Systems, Inc.
References: <1288023736.11153.40.camel@xxxxxxxxxxxxxxxxxxxxxx> <1288023813-31989-1-git-send-email-ian.campbell@xxxxxxxxxx> <20101025173522.GA5590@xxxxxxxxxxxx>

On Mon, 2010-10-25 at 18:35 +0100, Konrad Rzeszutek Wilk wrote:
> So I am curious: what does /proc/interrupts look like? The issue (and the
> reason for this implementation above) was that under PV with PCI devices
> we would overlap PCI device IRQs with Xen event channels. So we could
> have a USB device at IRQ 16 _and_ also a xen_spinlock4 handler. That
> would throw off the system, since xen_spinlock4 was an edge type handler
> while the USB device was a level one (at least on my box).
I suspect what we should really be doing is to segregate the different
classes of event channel in IRQ space. I _think_ this new stuff is happy
with a discontinuous (but presumably clustered) IRQ space; I should
probably check.
E.g. regular interdomain event channels, VIRQs and the like should
probably request allocations from some range higher than nr_hw_irqs,
thus avoiding conflicts with hardware PIRQ event channels, which would
ask for a 1-1 mapping with the GSI (i.e. the same interrupt numbers the
device would get under native, AIUI).
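
As a rough sketch (the function names here are illustrative, not from
the series, and I am using x86's nr_irqs_gsi as the "number of hardware
IRQs" boundary), the split would look something like this in terms of
the irq_alloc_desc_at()/irq_alloc_desc_from() helpers this series
switches to:

#include <linux/irq.h>

extern int nr_irqs_gsi;	/* x86: first IRQ number beyond the GSI range */

/* PIRQ-backed event channels: ask for a 1-1 mapping with the GSI so
 * the IRQ number matches what the device would get under native. */
static int xen_allocate_irq_gsi(unsigned int gsi)
{
	return irq_alloc_desc_at(gsi, -1);	/* -1: no NUMA preference */
}

/* Interdomain event channels, VIRQs, IPIs and friends: allocate from
 * above the hardware range so they can never collide with a GSI. */
static int xen_allocate_irq_dynamic(void)
{
	return irq_alloc_desc_from(nr_irqs_gsi, -1);
}

Both return the allocated IRQ number or a negative errno, so the error
handling slots in where the open-coded allocator used to be.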
We might even decide to start the interdomain event channel range even
higher than nr_hw_irqs in order to leave room for the more dynamic h/w
PIRQs (e.g. MSIs) just after nr_hw_irqs. Assuming this is consistent
with what would happen on native, it is probably worthwhile.
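
In code that would just mean bumping the base of the dynamic range; the
headroom constant below is a number picked purely for illustration:

#define XEN_MSI_HEADROOM 256	/* illustrative: room for dynamic h/w PIRQs */

static int xen_allocate_irq_dynamic(void)
{
	/* Leave [nr_irqs_gsi, nr_irqs_gsi + XEN_MSI_HEADROOM) free for
	 * dynamically allocated h/w PIRQs such as MSIs. */
	return irq_alloc_desc_from(nr_irqs_gsi + XEN_MSI_HEADROOM, -1);
}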
> But with this shiny sparse_irq rework, maybe this is not an issue anymore?
> Can we mix level and edge chip handlers under one IRQ?
I doubt the sparse irq rework had any impact on this aspect, but it does
help us more easily arrange for them not to be shared in that way in the
first place.
> What do you see when you pass in a PCI device and, say, give the guest 32 CPUs?
I can try tomorrow and see; based on what you say above, without
implementing what I described I suspect the answer will be "carnage".
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel