Re: [Xen-devel] [FYI] Much difference between netperf results on every run
Hi Andrew and Keir,
Sorry for the delayed response.
1/ The updated results show that there is still a large deviation
(about +/-75%) in netperf throughput even with vcpu-pin.
2/ The throughput was rather worse with queue_length=100 (though the
difference was not statistically significant according to a t-test),
and its deviation was still large. My expectation was that the
throughput would improve with less packet loss and that its deviation
would get smaller.
                      Dom-0 to Dom-U      Dom-U to Dom-0
---------------------------------------------------------------
default queue_length  975.06 (+/-5.11)    386.04 (+/-292.30)
queue_length=100      954.31 (+/-3.03)    293.41 (+/-180.94)
(unit: Mbps)
The Xen version was xen-unstable C/S 11834. Each domain has one vcpu.
Vcpu-pin is configured so that a logical processor is dedicated to
each vcpu.
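(For reference, such pinning can be set up with the xm toolstack; the
domain name and CPU numbers below are only examples, not the actual
configuration used:)

    # Pin vcpu 0 of Domain-0 to logical processor 0,
    # and vcpu 0 of the guest "domU" to logical processor 1.
    xm vcpu-pin Domain-0 0 0
    xm vcpu-pin domU 0 1
    # Confirm the pinning took effect:
    xm vcpu-list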
For comparison, the results for ia64 with the same vcpu-pin
configuration are below (Xen version: ia64-unstable C/S 11961).
                      Dom-0 to Dom-U      Dom-U to Dom-0
---------------------------------------------------------------
default queue_length  279.27 (+/-5.78)    1444.70 (+/-10.06)
queue_length=100      278.37 (+/-3.84)    1498.90 (+/-12.02)
(unit: Mbps)
Regards,
Hiroya
Keir Fraser wrote:
On 19/10/06 7:56 pm, "Andrew Theurer" <habanero@xxxxxxxxxx> wrote:
the throughput measured by netperf differs from run to run. The
changeset was xen-unstable.hg C/S 11760. This was observed when I
executed netperf on DomU connecting to a netserver on Dom0 in the
same box. The observed throughput was between 185 Mbps and 3854 Mbps.
I have never seen such a difference on ia64.
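(For context, a minimal sketch of how such a measurement is typically
run; the address below is a placeholder, not an actual host:)

    # On Dom0: start the netperf server daemon
    netserver
    # On DomU: run the default TCP_STREAM test against Dom0 for 60 seconds
    netperf -H <dom0-ip> -l 60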
I am also seeing this, though with less variability. Actually, I am
seeing significantly less throughput (about 1/6th) for dom0->domU than
for domU->dom0, and for dom0->domU I am seeing +/-15% variability. I am
looking into it, but so far I have not discovered anything.
Do you know what cpus the domU and dom0 are using? You might try pinning
the domains to cpus and see what happens.
Current suspicion is packet loss due to insufficient receive buffers. Try
specifying the module option "netback.queue_length=100" in domain 0. We're
working on a proper fix for 3.0.3-1.
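(A sketch of how that option can be set in dom0; the exact mechanism
depends on whether netback is built as a module or into the kernel:)

    # If netback is a loadable module:
    modprobe netback queue_length=100
    # or persistently, via /etc/modprobe.conf:
    #   options netback queue_length=100
    # If netback is built into the dom0 kernel, append the option to
    # its boot command line instead:
    #   netback.queue_length=100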
-- Keir
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel