Re: [Xen-devel] Fix bind_irq_vector() destination
By the way, could an IRQ's 'domain' be given a better name in Xen? We
already have a meaning for domain, and it makes the code very confusing! Can
we call it cpu_affinity or cpu_binding, or something a bit more meaningful
and distinguishable?
-- Keir
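
For reference, the member in question is the cpumask field of struct irq_cfg, which despite its name has nothing to do with Xen's struct domain. A rough sketch of the declaration from the tree around this time (other members omitted, exact layout approximate):

    /* xen/include/asm-x86/irq.h -- approximate, fields omitted */
    struct irq_cfg {
        int       vector;   /* IDT vector currently assigned to this IRQ   */
        cpumask_t domain;   /* CPUs the vector is installed on; this is    */
                            /* the field a rename such as cpu_affinity     */
                            /* would target                                 */
    };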
On 26/08/2010 10:14, "Sheng Yang" <sheng@xxxxxxxxxxxxxxx> wrote:
> The "mask" covered all online cpus in the "domain". It should be used as
> destination later, instead of using "domain" directly.
>
> Signed-off-by: Sheng Yang <sheng@xxxxxxxxxxxxxxx>
>
> --
> diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
> --- a/xen/arch/x86/irq.c
> +++ b/xen/arch/x86/irq.c
> @@ -86,14 +86,14 @@
> cpus_and(mask, domain, cpu_online_map);
> if (cpus_empty(mask))
> return -EINVAL;
> - if ((cfg->vector == vector) && cpus_equal(cfg->domain, domain))
> + if ((cfg->vector == vector) && cpus_equal(cfg->domain, mask))
> return 0;
> if (cfg->vector != IRQ_VECTOR_UNASSIGNED)
> return -EBUSY;
> for_each_cpu_mask(cpu, mask)
> per_cpu(vector_irq, cpu)[vector] = irq;
> cfg->vector = vector;
> - cfg->domain = domain;
> + cfg->domain = mask;
> irq_status[irq] = IRQ_USED;
> if (IO_APIC_IRQ(irq))
> irq_vector[irq] = vector;
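
With the change applied, __bind_irq_vector() reads roughly as follows. This is a sketch for context only: the function prologue and return path are reconstructed from the surrounding tree of that era and may differ in detail from the actual source.

    static int __bind_irq_vector(int irq, int vector, cpumask_t domain)
    {
        cpumask_t mask;
        int cpu;
        struct irq_cfg *cfg = irq_cfg(irq);

        /* Only the online subset of the requested domain can receive the IRQ. */
        cpus_and(mask, domain, cpu_online_map);
        if (cpus_empty(mask))
            return -EINVAL;
        /* Compare against the online mask, not the raw request (the fix). */
        if ((cfg->vector == vector) && cpus_equal(cfg->domain, mask))
            return 0;
        if (cfg->vector != IRQ_VECTOR_UNASSIGNED)
            return -EBUSY;
        /* Install the vector-to-IRQ mapping only on the online CPUs. */
        for_each_cpu_mask(cpu, mask)
            per_cpu(vector_irq, cpu)[vector] = irq;
        cfg->vector = vector;
        /* Record the online mask so later dispatch targets only live CPUs. */
        cfg->domain = mask;
        irq_status[irq] = IRQ_USED;
        if (IO_APIC_IRQ(irq))
            irq_vector[irq] = vector;
        return 0;
    }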
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel