Re: [Xen-devel] More network tests with xenoprofile this time
On Wednesday 01 June 2005 15:03, Jon Mason wrote:
> On Tuesday 31 May 2005 05:48 pm, Ian Pratt wrote:
> > > I have cpu util from polling xc_domain_get_cpu_usage() for
> > > both domains, which is (an excerpt from the whole run, in 3
> > > second intervals):
> > >
> > > cpu0: [100.4] d0-0[100.4]
> > > cpu2: [045.1] d1-0[045.1]
> >
> > OK, so you're confident idle time would be reported OK if there was
> > any.
> >
> > > > Is the Ethernet NIC sharing an interrupt with the USB
> > >
> > > controller per
> > >
> > > > chance?
> > >
> > > Not as far as I can tell:
> > >
> > > CPU0
> > > 11: 6764395 Phys-irq ohci_hcd
> > > 24: 6037311 Phys-irq eth0
> > > 260: 1688517 Dynamic-irq vif1.0
> >
> > Anyone care to suggest why ohci_hcd is taking so many interrupts?
> > Looks very fishy to me. I take it you're not using a USB Ethernet
> > NIC? :-)
>
> The bladecenters have a shared USB connected to all the blades. I
> would imagine it is the keyboard/mouse or USB CDROM connected to this
> bus that is generating all of these interrupts.
>
> > What happens if you boot 'nousb' ?
>
> This shouldn't hurt anything, unless Andrew needs access to kdb or
> cdrom.
This is on an x336 system, P4 Xeon, not much USB really needed. I did
not see any difference in performance or the profile with nousb.
I also tried disabling the locks in find_domain_by_id and saw no
difference. I'm curious to see how things differ with dom0 on CPU-0
HT-0 and dom1 on CPU-0 HT-1. I will probably try that next.
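For reference, the utilisation figures at the top of the thread came from
polling xc_domain_get_cpu_usage(), which returns cumulative CPU time
consumed by a domain. A minimal sketch of turning two such cumulative
samples (here assumed to be in nanoseconds) into a percentage over the
3-second polling interval -- the sampling itself is left out, since it
goes through libxc:

```python
def utilisation_pct(prev_ns, curr_ns, interval_s):
    """Convert two cumulative CPU-time samples (nanoseconds) into a
    utilisation percentage over the sampling interval."""
    delta_ns = curr_ns - prev_ns
    return 100.0 * delta_ns / (interval_s * 1e9)

# Example: a domain that consumed 1.353 s of CPU over a 3 s interval,
# matching the ~45.1% figure reported for dom1 above.
print(round(utilisation_pct(0, 1_353_000_000, 3.0), 1))
```

Values at or near 100% across every sample are what confirm there is no
idle time being hidden by the polling granularity.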
FWIW, baremetal Linux used about 33% of one CPU to drive the same
throughput. Interrupts/sec were 41k/sec for baremetal vs 59k/sec for
dom0. I don't have the breakdown of ints/sec per interrupt number yet.
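The missing per-interrupt breakdown can be had by diffing two snapshots
of /proc/interrupts taken a known interval apart. A rough sketch,
assuming the single-counter-column layout shown in the excerpt above
(the snapshot strings here are illustrative, not real measurements):

```python
def parse_interrupts(text):
    """Parse /proc/interrupts-style output into {irq: count}.
    Assumes a single counter column, as in the excerpt above."""
    counts = {}
    for line in text.strip().splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[0].endswith(':'):
            counts[fields[0].rstrip(':')] = int(fields[1])
    return counts

def rates(before, after, interval_s):
    """Interrupts/sec per IRQ between two snapshots."""
    return {irq: (after[irq] - before[irq]) / interval_s
            for irq in after if irq in before}

# Hypothetical snapshots 3 seconds apart:
before = parse_interrupts("""
 11:  6764395  Phys-irq     ohci_hcd
 24:  6037311  Phys-irq     eth0
260:  1688517  Dynamic-irq  vif1.0
""")
after = parse_interrupts("""
 11:  6764695  Phys-irq     ohci_hcd
 24:  6097311  Phys-irq     eth0
260:  1691517  Dynamic-irq  vif1.0
""")
print(rates(before, after, 3.0))
```

That would show at a glance whether the extra 18k ints/sec in dom0 are
coming from eth0, the event channels, or ohci_hcd.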
-Andrew
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel