
Re: [Xen-devel] Re: "ACPI: Unable to start the ACPI Interpreter"



On Mon, Jun 27, 2011 at 11:12:43PM +0800, Liwei wrote:
> On 27 June 2011 21:39, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
> > On Sat, Jun 25, 2011 at 08:33:04PM +0800, Liwei wrote:
> > > Just a follow up. I found out that the boot failure's due to an
> > > unsupported (faulty?) PATA controller. Removed it and the system
> > > actually boots. The SCI allocation failure still occurs though.
> >
> > That in general causes the ACPI interpreter to stop working
> > completely.
> 
> Does failure of the interpreter cause anything bad to happen? Common

Well, yes. It can't interpret the ACPI _PRT tables, so the
interrupt routing information is not present.

Which means the drivers fall back to polling mode or end up
using the wrong IRQs.
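
A quick way to double-check that from dom0, using nothing beyond dmesg
and /proc (the grep patterns below are only illustrative):

    # Did the interpreter come up at all?
    dmesg | grep -i interpreter    # "Interpreter enabled" vs. "Unable to start"

    # What IRQ routing did the PCI devices actually end up with?
    dmesg | grep -iE "pci.*(irq|gsi)"

    # And what is bound right now:
    cat /proc/interrupts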

> sense tells me that things should be going very wrong, but the system
> does come up. I can ssh in, run hvm domains, etc. The only problem I
> see is that interrupt mapping for certain PCI passthrough devices
> (particularly one of the two Intel EHCI USB controllers and a
> firewire controller) fails. VGA passthrough, with or without
> gfx_passthru = 1, still works fine though:
> 
> #xl dmesg
> (XEN) physdev.c:164: dom2: 10:-1 already mapped to 16
> (XEN) irq.c:1297:d0 Cannot bind IRQ 16 to guest. Others do not share.
> (XEN) domctl.c:914:d0 pt_irq_create_bind failed!
> (XEN) irq.c:1297:d0 Cannot bind IRQ 20 to guest. Others do not share.
> (XEN) domctl.c:914:d0 pt_irq_create_bind failed!
> 
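
The "Others do not share" refusal means something in dom0 already holds
those level-triggered lines. A rough way to see what, assuming standard
tools in dom0:

    # Who currently owns IRQ 16 and 20?
    grep -E "^ *(16|20):" /proc/interrupts

    # Which PCI devices are wired to those lines?
    lspci -v | grep -E "IRQ (16|20)$"

IRQ 20 is, notably, exactly where the SCI override lands further down
in the dmesg, which by itself would explain why nothing else may share it.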
> #cat qemu-dm-vm.log
> IRQ type = MSI-INTx
> dm-command: hot insert pass-through pci dev
> register_real_device: Assigning real physical device 00:1a.0 ...
> pt_iomul_init: Error: pt_iomul_init can't open file /dev/xen/pci_iomul: No such file or directory: 0x0:0x1a.0x0
> pt_register_regions: IO region registered (size=0x00001000 base_addr=0xa0004000)
> pci_intx: intx=1
> register_real_device: Error: Binding of interrupt failed! rc=-1
> register_real_device: Real physical device 00:1a.0 registered successfuly!
> IRQ type = INTx
> dm-command: hot insert pass-through pci dev
> register_real_device: Assigning real physical device 0f:03.0 ...
> pt_iomul_init: Error: pt_iomul_init can't open file /dev/xen/pci_iomul: No such file or directory: 0xf:0x3.0x0
> pt_register_regions: IO region registered (size=0x00001000 base_addr=0xfbe04000)
> pt_register_regions: IO region registered (size=0x00004000 base_addr=0xfbe00000)
> pci_intx: intx=1
> register_real_device: Error: Binding of interrupt failed! rc=-1
> register_real_device: Real physical device 0f:03.0 registered successfuly!
> 
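
The pt_iomul_init complaint is most likely a red herring:
/dev/xen/pci_iomul belongs to an optional, out-of-tree IO-port
multiplexing driver, and qemu-dm simply carries on without it. A quick
way to confirm the node is just absent:

    # Optional iomul device node; missing means the feature is off:
    ls -l /dev/xen/pci_iomul 2>/dev/null || echo "no pci_iomul node"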
> #lspci -v
> 00:1a.0 USB Controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05) (prog-if 20 [EHCI])
>         Subsystem: eVga.com. Corp. Device 1014
>         Flags: bus master, medium devsel, latency 0, IRQ 10
>         Memory at a0004000 (32-bit, non-prefetchable) [size=4K]
>         Capabilities: [50] Power Management version 2
>         Capabilities: [58] Debug port: BAR=1 offset=00a0
>         Capabilities: [98] PCI Advanced Features
>         Kernel driver in use: xen-pciback
> 0f:03.0 FireWire (IEEE 1394): Texas Instruments TSB43AB22/A IEEE-1394a-2000 Controller (PHY/Link) (prog-if 10 [OHCI])
>         Subsystem: nVidia Corporation Device cb84
>         Flags: bus master, medium devsel, latency 64, IRQ 11
>         Memory at fbe04000 (32-bit, non-prefetchable) [size=4K]
>         Memory at fbe00000 (32-bit, non-prefetchable) [size=16K]
>         Capabilities: [44] Power Management version 2
>         Kernel driver in use: xen-pciback
> 
> (Only IRQs 1, 3, 5, 8, 10, 11, 12, 200++ appear in /proc/interrupts)
> 
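Given that list, it is worth checking explicitly whether the SCI got
bound at all, e.g.:

    # A bound SCI shows up with the "acpi" action name:
    grep acpi /proc/interrupts || echo "SCI not bound"
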
> >
> > If you look at the full serial log do you see INT_SRC_OVR for IRQ 9?
> > You should see something akin to this:
> >
> ----snip----
> 
> Yes I do:
> 
> [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
> [    0.000000] ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 20 low level)
> [    0.000000] ACPI: IRQ0 used by override.
> [    0.000000] ACPI: IRQ2 used by override.
> [    0.000000] ACPI: IRQ9 used by override.
> [    0.000000] Using ACPI (MADT) for SMP configuration information
> ----snip----
> [    3.405497] Memory: 776452k/13631488k available (3239k kernel code, 3146184k absent, 9708852k reserved, 3423k data, 532k init)
> [    3.405579] SLUB: Genslabs=15, HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
> [    3.405616] Preemptible hierarchical RCU implementation.
> [    3.405630] NR_IRQS:33024 nr_irqs:2048 16
> [    3.405705] xen: sci override: global_irq=20 trigger=0 polarity=1
> [    3.405708] xen: registering gsi 20 triggering 0 polarity 1
> [    3.405721] xen: --> pirq=20 -> irq=20
> [    3.405728] xen: acpi sci 20

Whoa. 20. That is unusual, but we do set it up. Perhaps ineptly, though.
Is there an option in the BIOS to toggle the ACPI SCI setting?

Does /proc/interrupts have an entry like this for IRQ 20? (The sample
below is from a machine whose SCI is the more usual IRQ 9.)

   9:          3          0          0          0          0          0  xen-pirq-ioapic-level  acpi

> [    3.405732] xen: --> pirq=1 -> irq=1
> [    3.405736] xen: --> pirq=2 -> irq=2
> [    3.405739] xen: --> pirq=3 -> irq=3
> [    3.405743] xen: --> pirq=4 -> irq=4
> [    3.405746] xen: --> pirq=5 -> irq=5
> [    3.405750] xen: --> pirq=6 -> irq=6
> [    3.405753] xen: --> pirq=7 -> irq=7
> [    3.405757] xen: --> pirq=8 -> irq=8
> [    3.405760] xen: --> pirq=10 -> irq=10
> [    3.405763] xen: --> pirq=11 -> irq=11
> [    3.405767] xen: --> pirq=12 -> irq=12
> [    3.405770] xen: --> pirq=13 -> irq=13
> [    3.405774] xen: --> pirq=14 -> irq=14
> [    3.405777] xen: --> pirq=15 -> irq=15
> 
> Also, it seems that the same SCI allocation failure and the problems
> described above are present in the 2.6.39 branch as well. The only
> version I can get to work reliably is 2.6.32.xx.
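
Since 2.6.32.xx boots reliably and 2.6.39 does not, a kernel bisect
between the two trees should pin down the offending change, assuming
you can build and boot test kernels (the tag names below are
illustrative):

    git bisect start
    git bisect bad v2.6.39
    git bisect good v2.6.32
    # build and boot each candidate kernel, then mark it:
    git bisect good    # or: git bisect bad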
