Re: [Xen-devel] [PATCH] IRQ: fix incorrect logic in __clear_irq_vector (v2)
On 12/08/2011 14:54, "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx> wrote:
> In the old code, tmp_mask is the cpus_and() of cfg->cpu_mask and
> cpu_online_map. However, in the usual case of moving an IRQ from one
> PCPU to another because the scheduler decides it's a good idea,
> cfg->cpu_mask and cfg->old_cpu_mask do not intersect. This causes the
> old CPU's vector_irq table to keep the irq reference when it shouldn't.
>
> This leads to a resource leak if a domain is shut down while an irq has
> a move pending, which results in Xen's create_irq() eventually failing
> with -ENOSPC when all vector_irq tables are full of stale references.
>
> v2: reuse tmp_mask to take account of online cpus
Nasty bug, nice fix!
The extra field in irq_cfg sounds plausible to me -- I don't mind adding it
if it's a nice cleanup.
Thanks,
Keir
> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>
> diff -r 1f08b380d438 -r bd106cc2aa65 xen/arch/x86/irq.c
> --- a/xen/arch/x86/irq.c Wed Aug 10 14:43:34 2011 +0100
> +++ b/xen/arch/x86/irq.c Fri Aug 12 14:54:11 2011 +0100
> @@ -216,6 +216,7 @@ static void __clear_irq_vector(int irq)
>
> if (likely(!cfg->move_in_progress))
> return;
> + cpus_and(tmp_mask, cfg->old_cpu_mask, cpu_online_map);
> for_each_cpu_mask(cpu, tmp_mask) {
> for (vector = FIRST_DYNAMIC_VECTOR; vector <= LAST_DYNAMIC_VECTOR;
> vector++) {
>
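
For readers unfamiliar with this path, here is a standalone sketch of the fixed cleanup logic. It is not Xen code: cpumasks are modelled as plain bitmasks, NR_CPUS/NR_VECTORS/IRQ_UNUSED and the simplified irq_cfg are illustrative stand-ins for Xen's real types, and only the essential step from the patch is shown, namely that the stale-vector walk must use the intersection of cfg->old_cpu_mask with the online map rather than cfg->cpu_mask.

/*
 * Simplified model of the corrected __clear_irq_vector cleanup.
 * Assumptions: bitmask cpumasks, tiny per-CPU vector tables.
 */
#include <stdio.h>

#define NR_CPUS    4
#define NR_VECTORS 16
#define IRQ_UNUSED (-1)

static int vector_irq[NR_CPUS][NR_VECTORS];   /* per-CPU vector -> irq */

struct irq_cfg {
    unsigned int cpu_mask;       /* CPUs the irq is moving to   */
    unsigned int old_cpu_mask;   /* CPUs the irq is moving from */
    int move_in_progress;
};

static void clear_stale_vectors(int irq, struct irq_cfg *cfg,
                                unsigned int online_map)
{
    if (!cfg->move_in_progress)
        return;

    /* The fix: intersect the *old* mask with the online map. */
    unsigned int tmp_mask = cfg->old_cpu_mask & online_map;

    for (int cpu = 0; cpu < NR_CPUS; cpu++) {
        if (!(tmp_mask & (1u << cpu)))
            continue;
        for (int vector = 0; vector < NR_VECTORS; vector++)
            if (vector_irq[cpu][vector] == irq)
                vector_irq[cpu][vector] = IRQ_UNUSED;
    }
    cfg->move_in_progress = 0;
}

int main(void)
{
    for (int c = 0; c < NR_CPUS; c++)
        for (int v = 0; v < NR_VECTORS; v++)
            vector_irq[c][v] = IRQ_UNUSED;

    /* irq 5 was on CPU0 (vector 3) and is mid-move to CPU2. */
    vector_irq[0][3] = 5;
    struct irq_cfg cfg = { .cpu_mask = 1u << 2,
                           .old_cpu_mask = 1u << 0,
                           .move_in_progress = 1 };

    clear_stale_vectors(5, &cfg, 0xFu /* all CPUs online */);
    printf("CPU0 vector 3 now maps to irq %d\n", vector_irq[0][3]);
    return 0;
}

With the old code, tmp_mask would have been built from cpu_mask (CPU2 only), so the stale entry on CPU0 would never be cleared and the vector would leak.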