On Tuesday 31 August 2010 18:46:28 Keir Fraser wrote:
> On 31/08/2010 09:55, "Jan Beulich" <JBeulich@xxxxxxxxxx> wrote:
> >> In fact, setup_ioapic_dest() would be called to reprogram the IOAPIC
> >> redirection table to follow "irq_cfg->cpu_mask", after SMP
> >> initialization work was done. So I think the better choice is to keep
> >> the original value in irq_cfg->cpu_mask, and just make sure the value
> >> we wrote to the IOAPIC redirection table is valid. Then modifying
> >> cpu_mask_to_apicid_flat() seems like a better idea.
> > Why would you need to modify only this function, but not the other
> > variants? If a CPU in the passed in mask can be offline, then
> > first_cpu() (as used in the other variants) can return an offline CPU,
> > and you don't want to program such into an RTE.
Yes, here is the patch, with the other variants modified as well.
> Indeed, also all other assignments to irq_cfg->cpu_mask include only online
> CPUs, so the current code is only being consistent in that respect.
After reading the code, I think it may not be that consistent. For example,
it seems that set_desc_affinity() and __clear_irq_vector() (as well as many
other functions) assume the CPUs contained in cfg->cpu_mask are all online.
> And in
> the general case (even if not specifically for IRQ0) that is important
> because IDT vectors are not allocated on offline CPUs, and so we could
> otherwise end up with CPUs coming online and finding they are in multiple
> irq_cfg's with the same vector!
I think this still applies, because we still don't allocate vectors for
offline CPUs: when we allocate vectors, we check cpu_online_map.
> Also the PIT is usually disabled after boot
> on Xen and so it being restricted to only CPU0 would really not matter. I
> think we should leave the code as is.
But the HPET, which replaces the PIT and uses IRQ0, would still be in use.
> -- Keir
Xen-devel mailing list