Re: [Xen-devel] More network tests with xenoprofile this time
On Wednesday 01 June 2005 15:21, Andrew Theurer wrote:
> On Wednesday 01 June 2005 15:03, Jon Mason wrote:
> > On Tuesday 31 May 2005 05:48 pm, Ian Pratt wrote:
> > > > I have cpu util from polling xc_domain_get_cpu_usage() for
> > > > both domains, which is (an excerpt from the whole run, in 3
> > > > second intervals):
> > > >
> > > > cpu0: [100.4] d0-0[100.4]
> > > > cpu2: [045.1] d1-0[045.1]
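
(Aside: the polling I'm describing is roughly the loop below. This is just
a sketch -- I'm assuming xc_domain_get_cpu_usage() takes (handle, domid,
vcpu) and returns cumulative CPU time in nanoseconds; check xenctrl.h for
the exact signature in your tree.)

/* sketch: poll cumulative domain CPU time every 3s and print % of one CPU */
#include <stdio.h>
#include <unistd.h>
#include <xenctrl.h>

int main(void)
{
    int xc = xc_interface_open();
    long long prev, now;

    if (xc < 0)
        return 1;

    prev = xc_domain_get_cpu_usage(xc, 0 /* domid */, 0 /* vcpu */);
    for (;;) {
        sleep(3);
        now = xc_domain_get_cpu_usage(xc, 0, 0);
        /* delta in ns over a 3 second window -> percent of one CPU */
        printf("d0-0[%05.1f]\n", (double)(now - prev) / 3e9 * 100.0);
        prev = now;
    }
    /* not reached */
    xc_interface_close(xc);
    return 0;
}
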
> > >
> > > OK, so you're confident idle time would be reported OK if there
> > > was any.
> > >
> > > > > Is the Ethernet NIC sharing an interrupt with the USB
> > > >
> > > > controller per
> > > >
> > > > > chance?
> > > >
> > > > Not as far as I can tell:
> > > >
> > > > CPU0
> > > > 11: 6764395 Phys-irq ohci_hcd
> > > > 24: 6037311 Phys-irq eth0
> > > > 260: 1688517 Dynamic-irq vif1.0
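
(If it helps anyone double-check for sharing without eyeballing the table,
something like the sketch below flags /proc/interrupts lines that list more
than one device name. Purely illustrative -- the column layout differs
between arches and the xenolinux tree.)

/* sketch: flag IRQ lines in /proc/interrupts that list multiple devices */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/interrupts", "r");
    char line[512];

    if (f == NULL) {
        perror("/proc/interrupts");
        return 1;
    }
    while (fgets(line, sizeof(line), f) != NULL) {
        /* a comma in the device-name column means the IRQ is shared */
        if (strchr(line, ':') != NULL && strchr(line, ',') != NULL)
            printf("shared: %s", line);
    }
    fclose(f);
    return 0;
}
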
> > >
> > > Anyone care to suggest why ohci_hcd is taking so many interrupts?
> > > Looks very fishy to me. I take it you're not using a USB Ethernet
> > > NIC? :-)
> >
> > The bladecenters have a shared USB bus connected to all the blades. I
> > would imagine it is the keyboard/mouse or USB CDROM connected to
> > this bus that is generating all of these interrupts.
> >
> > > What happens if you boot 'nousb' ?
> >
> > This shouldn't hurt anything, unless Andrew needs access to kdb or
> > cdrom.
>
> This is on an x336 system, P4 Xeon, not much USB really needed. I did
> not see any difference in performance or the profile with nousb.
>
> I also tried disabling the locks in find_domain_by_id and saw no
> difference. I'm curious to see how things differ with dom0 on CPU-0
> HT-0 and dom1 on CPU-0 HT-1. I will probably try that next.
>
> FWIW, baremetal Linux used about 33% of one CPU to drive the same
> throughput. Interrupts/sec were about 41k for baremetal vs 59k for dom0.
> I don't have the breakdown of interrupts/sec per interrupt number yet.
Wanted to follow up with one correction: I did not have USB disabled
properly. With USB properly removed, there is a slight reduction in IRQ
handling overhead as a result:
samples  %       image name              symbol name
542129   6.2205  xen-unstable-syms       mask_and_ack_level_ioapic_irq
506060   5.8067  xen-unstable-syms       end_level_ioapic_irq
475786   5.4593  vmlinux-2.6.11-xen0-up  net_tx_action
376309   4.3179  vmlinux-2.6.11-xen0-up  tg3_interrupt
263008   3.0178  xen-unstable-syms       find_domain_by_id
239789   2.7514  xen-unstable-syms       hypercall
224547   2.5765  vmlinux-2.6.11-xen0-up  nf_iterate
...vs about 8-9% each for the top two functions before. The interrupt
rate for the tg3 adapter is still very high, about 24k/sec. At that
rate it does not appear to have any interrupt coalescing going on, so I
am going to look into that.
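
One way to look at the current coalescing settings is the ethtool ioctl;
a rough sketch is below (it assumes the tg3 driver implements
ETHTOOL_GCOALESCE in this kernel, and "eth0" is just a placeholder name).
The same settings can be changed with ethtool -C once we know where they
stand.

/* sketch: query rx interrupt coalescing via the ethtool ioctl */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <linux/ethtool.h>
#include <linux/sockios.h>

int main(void)
{
    struct ethtool_coalesce ec;
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    if (fd < 0)
        return 1;

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);   /* placeholder interface */

    memset(&ec, 0, sizeof(ec));
    ec.cmd = ETHTOOL_GCOALESCE;
    ifr.ifr_data = (char *)&ec;

    if (ioctl(fd, SIOCETHTOOL, &ifr) == 0)
        printf("rx-usecs=%u rx-frames=%u\n",
               ec.rx_coalesce_usecs, ec.rx_max_coalesced_frames);
    else
        perror("ETHTOOL_GCOALESCE");

    close(fd);
    return 0;
}
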
-Andrew
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel