[Xen-devel] problem in setting cpumask for physical interrupt

To: <xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-devel] problem in setting cpumask for physical interrupt
From: "Agarwal, Lomesh" <lomesh.agarwal@xxxxxxxxx>
Date: Thu, 25 Oct 2007 17:36:06 -0700
Delivery-date: Thu, 25 Oct 2007 17:36:52 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcgXaDI5jZ0n5YLuSS2nMjrC8ba3Vw==
Thread-topic: problem in setting cpumask for physical interrupt

Why does pirq_guest_bind (in arch/x86/irq.c) call set_affinity with the cpumask of the current processor? If I understand correctly, pirq_guest_bind is called in response to a guest calling request_irq. So, if by chance all guests happen to call request_irq on the same physical processor, Xen may end up setting the interrupt affinity to only that one physical processor.
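For reference, this is roughly the binding path I am looking at (paraphrased from memory, so the exact field and helper names such as v->processor, cpu_set and desc->handler->set_affinity may not match the tree exactly):

    /* Sketch of the current behaviour in pirq_guest_bind (approximate):
     * the affinity mask ends up containing only the processor that the
     * binding VCPU happens to be running on at bind time. */
    cpumask_t cpumask = CPU_MASK_NONE;

    cpu_set(v->processor, cpumask);             /* only the current CPU */
    if ( desc->handler->set_affinity != NULL )
        desc->handler->set_affinity(irq, cpumask);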

I think Xen should set the affinity to all available processors instead. A VCPU is not guaranteed to keep running on the physical processor on which it called request_irq anyway.
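Something along these lines is what I have in mind (untested sketch, assuming the set_affinity hook takes a cpumask_t and that cpu_online_map is the mask of all online physical CPUs):

    /* Proposed: allow the interrupt to be delivered to any online CPU,
     * since the VCPU that bound it may be scheduled elsewhere later. */
    if ( desc->handler->set_affinity != NULL )
        desc->handler->set_affinity(irq, cpu_online_map);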

I will send a patch if my understanding looks ok.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel