xen-devel

Re: [Xen-devel] Re: [PATCH RFC] x86/acpi: don't ignore I/O APICs just because there's no local APIC

To: Yinghai Lu <yhlu.kernel@xxxxxxxxx>
Subject: Re: [Xen-devel] Re: [PATCH RFC] x86/acpi: don't ignore I/O APICs just because there's no local APIC
From: ebiederm@xxxxxxxxxxxx (Eric W. Biederman)
Date: Fri, 19 Jun 2009 22:40:44 -0700
Cc: Jeremy Fitzhardinge <jeremy@xxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxx>, the arch/x86 maintainers <x86@xxxxxxxxxx>, Linux Kernel Mailing List <linux-kernel@xxxxxxxxxxxxxxx>, Jan Beulich <JBeulich@xxxxxxxxxx>, Ingo Molnar <mingo@xxxxxxxxxx>, "H. Peter Anvin" <hpa@xxxxxxxxx>, Thomas Gleixner <tglx@xxxxxxxxxxxxx>, Len Brown <lenb@xxxxxxxxxx>
Delivery-date: Fri, 19 Jun 2009 22:41:25 -0700
In-reply-to: <86802c440906192058v78746acft161d74720c01a6a7@xxxxxxxxxxxxxx> (Yinghai Lu's message of "Fri, 19 Jun 2009 20:58:24 -0700")
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <4A329CF8.4050502@xxxxxxxx> <alpine.LFD.2.00.0906181206190.4213@xxxxxxxxxxxxxxxxxxxxx> <4A3A9220.4070807@xxxxxxxx> <m1zlc5jqac.fsf@xxxxxxxxxxxxxxxxx> <4A3A99FB.7070807@xxxxxxxx> <m1vdmtgtt2.fsf@xxxxxxxxxxxxxxxxx> <4A3AC0C4.6060508@xxxxxxxx> <86802c440906182232r31088e4fh3613a8da6f8903f7@xxxxxxxxxxxxxx> <4A3B5FCD0200007800006AC0@xxxxxxxxxxxxxxxxxx> <m1my84bpuq.fsf@xxxxxxxxxxxxxxxxx> <86802c440906192058v78746acft161d74720c01a6a7@xxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Gnus/5.11 (Gnus v5.11) Emacs/22.2 (gnu/linux)
Yinghai Lu <yhlu.kernel@xxxxxxxxx> writes:

> On Fri, Jun 19, 2009 at 1:16 AM, Eric W. Biederman <ebiederm@xxxxxxxxxxxx> wrote:
>> "Jan Beulich" <JBeulich@xxxxxxxxxx> writes:
>>
>>>>>> Yinghai Lu <yhlu.kernel@xxxxxxxxx> 19.06.09 07:32 >>>
>>>>doesn't Xen support per-cpu irq vectors?
>>>
>>> No.
>>>
>>>>got the following from Xen 3.3 / SLES 11:
>>>>
>>>>igb 0000:81:00.0: PCI INT A -> GSI 95 (level, low) -> IRQ 95
>>>>igb 0000:81:00.0: setting latency timer to 64
>>>>igb 0000:81:00.0: Intel(R) Gigabit Ethernet Network Connection
>>>>igb 0000:81:00.0: eth9: (PCIe:2.5Gb/s:Width x4) 00:21:28:3a:d8:0e
>>>>igb 0000:81:00.0: eth9: PBA No: ffffff-0ff
>>>>igb 0000:81:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s)
>>>>vendor=8086 device=3420
>>>>(XEN) irq.c:847: dom0: invalid pirq 94 or vector -28
>>>>igb 0000:81:00.1: PCI INT B -> GSI 94 (level, low) -> IRQ 94
>>>>igb 0000:81:00.1: setting latency timer to 64
>>>>(XEN) physdev.c:87: dom0: map irq with wrong vector -28
>>>>map irq failed
>>>>(XEN) physdev.c:87: dom0: map irq with wrong vector -28
>>>>map irq failed
>>>>
>>>>the system normally needs a lot of MSI-X vectors; with the current
>>>>mainline kernel, it will need about 360 irqs.
>>>
>>> Do you mean 360 connected devices, or just 360 IO-APIC pins (most of
>>> which are usually unused)? In the latter case, devices using MSI (i.e. not
>>> using high numbered IO-APIC pins) should work, while devices connected
>>> to IO-APIC pins numbered 256 and higher won't work in SLE11 as-is.
>>> This limitation got fixed recently in the 3.5-unstable tree, though. The
>>> 256 active vectors limit, however, continues to exist, so the former case
>>> would still not be supported by Xen.
>
> 5 IO-APIC controllers, so the total is something like 5 x 24 = 120 pins.
>
>>
>> Good question.  I know YH had a system a few years ago that exceeded 256 
>> vectors.
> That was in SimNow.
>
> This time it is real.
> Think about a system with 24 PCIe cards, where every card has two functions
> and each function will use 16 or 20 MSI-X vectors:
> something like 24 * 2 * 16.

I'm not too surprised.  I saw the writing on the wall when I implemented
per-irq vectors, and MSI-X was one of the likely candidates.
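
[A minimal sketch of the arithmetic involved, assuming a 16-cpu box and
~56 reserved vectors per cpu; both numbers are illustrative assumptions,
not figures from this thread.  The "vector -28" in the quoted log is
plausibly -ENOSPC, i.e. the vector allocator running dry:]

  #include <stdio.h>

  int main(void)
  {
          /* x86 has 256 IDT entries per cpu; exceptions and system
           * vectors consume some of them (56 assumed here). */
          int usable_per_cpu = 256 - 56;
          int cpus = 16;               /* assumed cpu count */
          int demand = 24 * 2 * 16;    /* cards * functions * MSI-X
                                        * vectors, per the quoted mail */

          printf("vectors needed:           %d\n", demand);
          printf("global allocation limit:  %d\n", usable_per_cpu);
          printf("per-cpu allocation limit: %d\n", cpus * usable_per_cpu);
          return 0;
  }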

I'm curious: what kind of PCIe cards do you have plugged in?  It looks
like you have an irq or two per cpu.
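
[Worked out from the quoted figures: 24 cards * 2 functions * 16 MSI-X
vectors = 768 interrupts, about three times the 256-entry x86 vector
space, so no single global vector allocation can cover it, while a
per-cpu allocation can.]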

Eric

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
