This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


Re: [Xen-devel] xen 4 only seeing one keyboard and mouse

On Thu, Aug 26, 2010 at 11:24:48PM +0100, M A Young wrote:
> On Thu, 26 Aug 2010, M A Young wrote:
> >On Thu, 26 Aug 2010, M A Young wrote:
> >
> >>Okay, here is my first attempt at dirty debugging. I have made a
> >>patch to try to track where vector_irq is being changed
> >>(attached), along with the resulting output. I have looked at it
> >>quickly, and it seems some low IRQs are not getting set on the
> >>second CPU.

Yeah, that definitely is the problem. Now to why it is happening.
> >
> >My next thought is that almost all IRQs allocated on the
> >first CPU before the second is started are not initialized on the
> >second CPU. I presume that __setup_vector_irq from
> >xen/arch/x86/irq.c is where that is supposed to happen,
> or perhaps it should happen in io_apic_set_pci_routing from
> xen/arch/x86/io_apic.c, where the higher IRQs are set (it doesn't,
> because __assign_irq_vector sees the IRQ is already in use:
>     old_vector = irq_to_vector(irq);
>     if (old_vector) {
>         cpus_and(tmp_mask, mask, cpu_online_map);
>         cpus_and(tmp_mask, cfg->domain, tmp_mask);
>         if (!cpus_empty(tmp_mask)) {
>             cfg->vector = old_vector;
>             return 0;
>         }
>     }
> but it misses the fact that the IRQ is only actually configured for
> one CPU).

<nods> That code is actually copied from the Linux kernel... so that raises
the question: how does it work on baremetal?

Let's recap: on Xen we do this in three stages:

 1). Set up all the legacy IRQs. We don't know yet how many CPUs we have,
     so we just set up the first sixteen IRQs of the IOAPIC on the first CPU.
     We also go through the IDT for CPU0 and set the IDT->IRQ mapping for
     those legacy IRQs.

  2). Then we find out we have more CPUs, and for the other CPUs (CPU1 and
      up) we set up the IDT and set IDT->IRQ to -1.

  3). Then when Dom0 starts, we get called for those IRQs once more, and we
      set the IDT for both CPUs to point to the same IRQ:

(XEN) __assign_irq_vector: setting vector_irq[160]=16 for cpu=0
(XEN) __assign_irq_vector: setting vector_irq[160]=16 for cpu=1

        except we don't do it for those that have already been set.
So I wonder how this works on baremetal. I have an inkling, but looking at the
boolean logic it doesn't make much sense to me. This is in the file
arch/x86/kernel/apic/io_apic.c, in the function setup_IO_APIC_irq:

        /*
         * For legacy irqs, cfg->domain starts with cpu 0 for legacy
         * controllers like 8259. Now that IO-APIC can handle this irq,
         * update the cfg->domain.
         */
        if (irq < legacy_pic->nr_legacy_irqs && cpumask_test_cpu(0, cfg->domain))
                apic->vector_allocation_domain(0, cfg->domain);

        if (assign_irq_vector(irq, cfg, apic->target_cpus()))
                return;

        dest = apic->cpu_mask_to_apicid_and(cfg->domain, apic->target_cpus());

        apic_printk(APIC_VERBOSE, KERN_DEBUG
                    "IOAPIC[%d]: Set routing entry (%d-%d -> 0x%x -> "
                    "IRQ %d Mode:%i Active:%i)\n",
                    apic_id, mp_ioapics[apic_id].apicid, pin, cfg->vector,
                    irq, trigger, polarity);

If you could, can you instrument it to print cfg->domain before
assign_irq_vector is called, and also instrument assign_irq_vector itself
similarly to what you did with Xen?

And also instrument the 'dest' value. Basically the idea is to see what
per_cpu(vector_irq) gets set to during bootup for the legacy IRQs, similarly
to what you did with Xen.

Xen-devel mailing list