To: "Cinco, Dante" <Dante.Cinco@xxxxxxx>
Subject: Re: [Xen-devel] IRQ SMP affinity problems in domU with vcpus > 4 on HP ProLiant G6 with dual Xeon 5540 (Nehalem)
From: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
Date: Thu, 15 Oct 2009 21:40:30 -0400
Cc: "xen-devel@xxxxxxxxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxxxxxxxx>, Keir Fraser <keir.fraser@xxxxxxxxxxxxx>, Qing He <qing.he@xxxxxxxxx>, "xiantao.zhang@xxxxxxxxx" <xiantao.zhang@xxxxxxxxx>
Delivery-date: Thu, 15 Oct 2009 18:49:14 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <20091016000942.GA24471@xxxxxxxxxxxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <20091012055456.GA19390@ub-qhe2> <2B044E14371DA244B71F8BF2514563F503F47298@xxxxxxxxxxxxxxxxx> <20091016000942.GA24471@xxxxxxxxxxxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mutt/1.5.19 (2009-01-05)
On Thu, Oct 15, 2009 at 08:09:42PM -0400, Konrad Rzeszutek Wilk wrote:
> On Wed, Oct 14, 2009 at 01:54:33PM -0600, Cinco, Dante wrote:
> > I switched over to Xen 3.5-unstable (changeset 20303) and pv_ops dom0
> > 2.6.31.1, hoping that this would resolve the IRQ SMP affinity problem. I
> > had to use pci-stub to hide the PCI devices since pciback wasn't working.
> > With vcpus=16 (APIC routing is physical flat), the interrupts were working
> > in domU and being routed to CPU0 with the default smp_affinity (ffff), but
> > changing it to any 16-bit one-hot value, or even rewriting the same
> > default value, resulted in a complete loss of interrupts (even on devices
> > whose smp_affinity had not been changed). With vcpus=4 (APIC routing is
> > logical flat), I could see the interrupts being load-balanced across all
> > CPUs, but as soon as I changed smp_affinity to any value, the interrupts
> > stopped. This used to work reliably with the non-pv_ops kernel. I attached
> > the logs in case anyone wants to take a look.
> > 
> > I did see the MSI message address/data change in both domU and dom0 (using 
> > "lspci -vv"):
> > 
> > vcpus=16:
> > 
> > domU MSI message address/data with default smp_affinity:
> >   Address: 00000000fee00000  Data: 40a9
> > domU MSI message address/data after smp_affinity=0010:
> >   Address: 00000000fee08000  Data: 40b1  (8 is APIC ID of CPU4)
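
A minimal sketch of the affinity change described above, in Python: write a
one-hot mask to /proc/irq/<N>/smp_affinity and compare the per-CPU counts in
/proc/interrupts before and after. The IRQ number (48) and mask (0x0010, CPU4)
are illustrative placeholders, not values from the report, and the write
requires root.

import time

IRQ = 48          # hypothetical guest IRQ number (check /proc/interrupts)
MASK = 0x0010     # one-hot mask selecting CPU4

def irq_counts(irq):
    # Per-CPU interrupt counts for one IRQ, parsed from /proc/interrupts.
    with open("/proc/interrupts") as f:
        ncpus = len(f.readline().split())        # header: CPU0 CPU1 ...
        for line in f:
            fields = line.split()
            if fields and fields[0] == "%d:" % irq:
                return [int(c) for c in fields[1:1 + ncpus]]
    return None

before = irq_counts(IRQ)
with open("/proc/irq/%d/smp_affinity" % IRQ, "w") as f:
    f.write("%x" % MASK)                         # hex mask, root required
time.sleep(5)                                    # let some interrupts arrive
after = irq_counts(IRQ)
print("before:", before)
print("after: ", after)   # identical counts here would mean interrupts stopped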
> 
> What does Xen tell you (hit Ctrl-A three times and then 'z')? Specifically,
> look for vector 169 (a9) and 177 (b1).
> Do those values match what you see in DomU and Dom0? In particular, does 177
> have a dest_id of 8?
> Oh, and also check the guest interrupt information, to see if those values
> match.

N/m. I was thinking that maybe your IOAPIC has those vectors programmed in it.
But that would not make any sense.
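
For reference, a short sketch decoding the MSI address/data pairs quoted in
this thread under the standard x86 MSI layout (address bits 19:12 carry the
destination APIC ID, data bits 7:0 the vector); it reproduces the vector
169/177 and dest_id 8 values mentioned above.

def decode_msi(addr, data):
    dest_id = (addr >> 12) & 0xff   # destination APIC ID
    vector = data & 0xff            # interrupt vector
    return dest_id, vector

for addr, data in [(0xfee00000, 0x40a9),   # domU, default smp_affinity
                   (0xfee08000, 0x40b1)]:  # domU, smp_affinity=0010
    dest, vec = decode_msi(addr, data)
    print("addr=%#010x data=%#06x -> dest_id=%d vector=%d (%#x)"
          % (addr, data, dest, vec, vec))
# Prints dest_id=0, vector=169 (0xa9) and dest_id=8, vector=177 (0xb1).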

> > 
> > dom0 MSI message address/data with default smp_affinity:
> >   Address: 00000000fee00000  Data: 4094
> > dom0 MSI message address/data after smp_affinity=0010:
> >   Address: 00000000fee00000  Data: 409c
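
A rough sketch of pulling the same MSI Message Address/Data that "lspci -vv"
reports straight out of the device's config space via sysfs, by walking the
capability list to the MSI capability (ID 0x05). The BDF below is a
placeholder, and reading past offset 0x40 normally requires root.

import struct

BDF = "0000:07:00.0"   # placeholder device address

with open("/sys/bus/pci/devices/%s/config" % BDF, "rb") as f:
    cfg = f.read(256)

def u8(off):  return struct.unpack_from("<B", cfg, off)[0]
def u16(off): return struct.unpack_from("<H", cfg, off)[0]
def u32(off): return struct.unpack_from("<I", cfg, off)[0]

ptr = u8(0x34)                       # capabilities pointer
while ptr:
    cap_id, nxt = u8(ptr), u8(ptr + 1)
    if cap_id == 0x05:               # MSI capability
        ctrl = u16(ptr + 2)          # Message Control
        addr = u32(ptr + 4)          # Message Address (low 32 bits)
        if ctrl & (1 << 7):          # 64-bit address capable
            addr |= u32(ptr + 8) << 32
            data = u16(ptr + 0x0c)
        else:
            data = u16(ptr + 8)
        print("MSI address: %#x  data: %#06x" % (addr, data))
        break
    ptr = nxt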

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
