[Xen-devel] Re: [PATCH] IRQ: fix incorrect logic in __clear_irq_vector

To: "Andrew Cooper" <andrew.cooper3@xxxxxxxxxx>
Subject: [Xen-devel] Re: [PATCH] IRQ: fix incorrect logic in __clear_irq_vector
From: "Jan Beulich" <JBeulich@xxxxxxxxxx>
Date: Fri, 12 Aug 2011 14:41:55 +0100
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>
In-reply-to: <4E4526F2.9080704@xxxxxxxxxx>
References: <fa051d11b3de19c9cea5.1313154609@xxxxxxxxxxxxxxxxxxxxxxxxxxx> <4E4526F2.9080704@xxxxxxxxxx>
>>> On 12.08.11 at 15:13, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> wrote:
> On 12/08/11 14:10, Andrew Cooper wrote:
>> In the old code, tmp_mask is the cpus_and of cfg->cpu_mask and
>> cpu_online_map.  However, in the usual case of moving an IRQ from one
>> PCPU to another because the scheduler decides it's a good idea,
>> cfg->cpu_mask and cfg->old_cpu_mask do not intersect.  This causes the
>> old CPU's vector_irq table to keep the irq reference when it shouldn't.
>>
>> This leads to a resource leak if a domain is shut down while an irq has
>> a move pending, which results in Xen's create_irq() eventually failing
>> with -ENOSPC when all vector_irq tables are full of stale references.
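(For reference, a simplified sketch of the surrounding code, not the
exact source:

    cpus_and(tmp_mask, cfg->cpu_mask, cpu_online_map);
    ...
    if (likely(!cfg->move_in_progress))
        return;
    for_each_cpu_mask(cpu, tmp_mask) {
        /* scan this CPU's vector_irq[] for stale references to irq */
    }

tmp_mask is derived from cfg->cpu_mask, so when the old and new masks
don't intersect, the cleanup loop never visits the CPUs named in
cfg->old_cpu_mask, and their vector_irq entries keep the stale irq.)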
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
>>
>> diff -r 1f08b380d438 -r fa051d11b3de xen/arch/x86/irq.c
>> --- a/xen/arch/x86/irq.c     Wed Aug 10 14:43:34 2011 +0100
>> +++ b/xen/arch/x86/irq.c     Fri Aug 12 14:09:52 2011 +0100
>> @@ -216,7 +216,7 @@ static void __clear_irq_vector(int irq)
>>  
>>      if (likely(!cfg->move_in_progress))
>>          return;
>> -    for_each_cpu_mask(cpu, tmp_mask) {
>> +    for_each_cpu_mask(cpu, cfg->old_cpu_mask) {

I think you rather want

    cpus_and(tmp_mask, cfg->old_cpu_mask, cpu_online_map);

before the loop, and keep the looping on tmp_mask. Otherwise you're
in danger of accessing offline CPUs' per-CPU data.
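I.e. (a sketch of what I mean, folded into the quoted hunk; the loop
body below the quoted lines is reconstructed from context, untested):

    cpus_and(tmp_mask, cfg->old_cpu_mask, cpu_online_map);
    for_each_cpu_mask(cpu, tmp_mask) {
        for (vector = FIRST_DYNAMIC_VECTOR; vector <= LAST_DYNAMIC_VECTOR;
                                vector++) {
            if (per_cpu(vector_irq, cpu)[vector] != irq)
                continue;
            per_cpu(vector_irq, cpu)[vector] = -1;
            break;
        }
    }

That still covers the CPUs the IRQ is being moved away from, but
without touching offline CPUs' per-CPU data.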

>>          for (vector = FIRST_DYNAMIC_VECTOR; vector <= LAST_DYNAMIC_VECTOR;
>>                                  vector++) {
>>              if (per_cpu(vector_irq, cpu)[vector] != irq)
> Apologies for the previous spam of this patch - I failed somewhat with
> patchbomb.
> 
> Two things come to mind.
> 
> 1) This affects all versions of Xen since per-CPU IDTs were introduced,
> so it is a candidate for backporting to all relevant trees.
> 
> 2) What would the tradeoff be of adding a "u8 old_vector" to irq_cfg?
> It would increase the size of the cfg structure, but would avoid several
> pieces of code which loop through all dynamic vectors and check whether
> the irq vector matches.

The size wouldn't grow (there's already a single bit at the end, so a
new u8 would just fill what is currently padding space). I didn't,
however, spot any other loop than the one here that would benefit.
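To illustrate the layout point (fields abridged, so a sketch rather
than the exact definition from the tree):

    struct irq_cfg {
        int vector;
        cpumask_t cpu_mask;
        cpumask_t old_cpu_mask;
        /* ... */
        u8 move_in_progress : 1; /* the single bit at the end */
        /* u8 old_vector;  <- would occupy what is currently padding,
                                so the structure wouldn't grow */
    };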

Jan

