Since I don't have particularly fast equipment, the significance of these
numbers lies in the relative differences between the no gplpv, 0.9.x, and
0.10.x results.
Since I am not subscribed to the list (and multiple attempts to resubscribe
haven't changed that), I will not be able to respond without top-posting.
Equipment: Core 2 Duo T5600, 1.83GHz each, 2M, SATA drive configured for
UDMA/100
System: fc8 64-bit, Xen 3.1.2, xen.gz 3.1.4, dom0 kernel 2.6.21
Tested HVM: XP Pro SP3 (2002) 32-bit w/512M, file-backed vbd on local disk,
tested w/iometer 2006-07-27 (1Gb \iobw.tst, 5 min run) & iperf 1.7.0 (1 min
run)
Since this is a file-backed vbd, the domu numbers are not expected to be
faster than the dom0 numbers.
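For anyone who wants the shape of the setup, here is a minimal sketch of the
kind of xm HVM config this corresponds to (the image path, guest name, and
bridge below are placeholders, not my actual values):

    name         = "winxp"
    builder      = "hvm"
    kernel       = "/usr/lib/xen/boot/hvmloader"
    device_model = "/usr/lib64/xen/bin/qemu-dm"
    memory       = 512
    # file-backed vbd on the local disk
    disk         = [ 'file:/var/lib/xen/images/winxp.img,hda,w' ]
    vif          = [ 'type=ioemu, bridge=xenbr0' ]
    boot         = "c"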
These are the numbers for iometer, booting w/o gplpv:
pattern 4k, 50% read, 0% random
dynamo host | io/s | MB/s | avg. i/o time (ms) | max i/o time (ms) | %CPU
domu w/qemu | 108.0 | 0.42 | 12.16 | 0 | 16.03
dom0 w/1Gb | 809.7 | 3.16 | 1.23 | 215.8 | 0
pattern 32k, 50% read, 0% random
domu w/qemu | 74.6 | 2.33 | -602.09 | 0 | 12.55
dom0 w/1Gb | 120.0 | 3.75 | 8.33 | 1142.3 | 0
These are the old 0.9.11-pre13 numbers for iometer:
pattern 4k, 50% read, 0% random
dynamo host | io/s | MB/s | avg. i/o time (ms) | max i/o time (ms) | %CPU
domu w/gplpv| 238.0 | 0.93 | 15.89 | 0 | 14.97
dom0 w/1Gb | 974.0 | 3.80 | 1.03 | 444.9 | 0
pattern 32k, 50% read, 0% random
domu w/gplpv| 97.0 | 3.03 | 10.30 | 0 | 18.16
dom0 w/1Gb | 110.0 | 3.44 | 9.08 | 1130.8 | 0
So the 4k domu numbers are roughly twice as fast (238 vs. 108 io/s), and the
32k numbers about 30% faster (97 vs. 75 io/s), with not much difference in the
dom0 numbers.
And now the 0.10.0.69 numbers (w/o /patchtpr):
pattern 4k, 50% read, 0% random
dynamo host | io/s | MB/s | avg. i/o time (ms) | max i/o time (ms) | %CPU
domu w/gplpv| 386.6 | 1.51 | 2.61 | 0 | 7.99
dom0 w/1Gb | 902.1 | 3.52 | 1.11 | 691.6 | 0
pattern 32k, 50% read, 0% random
domu w/gplpv| 121.9 | 3.81 | 9.99 | 0 | 4.41
dom0 w/1Gb | 59.7 | 1.87 | 16.75 | 1729.0 | 0
The 4k domu numbers are roughly 60% faster than 0.9.x (387 vs. 238 io/s),
while %CPU drops to about half for the 4k pattern and about a quarter for 32k.
And now the 0.10.0.69 numbers (with /patchtpr):
pattern 4k, 50% read, 0% random
dynamo host | io/s | MB/s | avg. i/o time (ms) | max i/o time (ms) | %CPU
domu w/gplpv| 769.8 | 3.01 | 1.30 | 0 | 6.46
dom0 w/1Gb | 506.8 | 1.98 | 1.97 | 942.8 | 0
pattern 32k, 50% read, 0% random
domu w/gplpv| 125.4 | 3.92 | 7.97 | 0 | 0.57
dom0 w/1Gb | 58.5 | 1.83 | 17.09 | 1710.0 | 0
The difference between the domu and dom0 numbers is no longer significant for
either the 4k or the 32k pattern (the domu actually comes out ahead here),
whereas the domu was much slower than dom0 with 0.9.x. %CPU is also about half
of the 0.9.x number for the 4k pattern, and insignificant for the 32k pattern.
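In case it helps anyone reproduce the with/without comparisons: the /gplpv and
/patchtpr switches go on the XP boot entry in boot.ini, so it is easy to keep
one boot entry per configuration. A sketch, using the usual single-disk ARC
path rather than anything copied from my system:

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="XP, no gplpv" /fastdetect
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="XP, gplpv" /fastdetect /gplpv
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="XP, gplpv + tpr patch" /fastdetect /gplpv /patchtpr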
Now running one domain's thread at a time, with any other domain running
the 'idle' task. This should represent the maximum speed for that domain, w/o
competing tasks. First the old numbers:
booting with no gplpv:
domu:
pattern     | io/s | MB/s | avg. i/o time (ms) | max i/o time (ms) | %CPU
4k pattern | 463.4 | 1.81 | 94.25 | 0 | 30.78
32k pattern | 225.3 | 7.04 | -69.49 | 0 | 18.16
dom0:
4k pattern | 1282.1 | 5.01 | 0.78 | 199.6 | 0
32k pattern | 1484.3 | 5.80 | 0.67 | 314.4 | 0
booting with gplpv:
gplpv 0.9.11-pre13:
pattern     | io/s | MB/s | avg. i/o time (ms) | max i/o time (ms) | %CPU
4k pattern | 857.5 | 3.35 | 5.00 | 0 | 39.48
32k pattern | 202.8 | 6.34 | 4.93 | 0 | 30.90
dom0:
4k pattern | 1361.9 | 5.32 | 0.73 | 218.1 | 0
32k pattern | 173.9 | 5.43 | 5.75 | 188.2 | 0
And now the new numbers:
gplpv 0.10.0.69 (with /patchtpr):
pattern     | io/s | MB/s | avg. i/o time (ms) | max i/o time (ms) | %CPU
4k pattern | 1169.9 | 4.57 | 4.13 | 0 | 10.15
32k pattern | 172.6 | 5.39 | 5.79 | 0 | 0.95
dom0:
4k pattern | 1835.1 | 7.17 | 0.54 | 208.9 | 0
32k pattern | 172.6 | 5.39 | 5.79 | 162.6 | 0
The i/o rate improvement on the domu between 0.9.x and 0.10.x is not as
significant as in the multi-threaded case above, but it is still better, and
with less %CPU.
For the network, a TCP test with 0.9.11-pre13, 'iperf-1.7.0 -c dom0-name -t
60 -r', gave:
domu->dom0: 16 Mb/s
dom0->domu: 44 Mb/s
For a UDP test requesting 10 Mb/s of bandwidth, 'iperf-1.7.0 -c dom0-name -t
60 -r -b 10000000', gave:
domu->dom0: 4.7 Mb/s, w/ 3% loss
dom0->domu: 6.2 Mb/s, w/33% loss
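(For anyone repeating these: '-r' makes iperf run the test in each direction
from a single client invocation; the other end just needs iperf running in
server mode. Roughly, with the client run in the domu:

    # on dom0 (add -u for the udp runs)
    iperf-1.7.0 -s
    # on the domu, tcp
    iperf-1.7.0 -c dom0-name -t 60 -r
    # on the domu, udp at a requested 10Mb/s (-b should imply -u)
    iperf-1.7.0 -c dom0-name -t 60 -r -b 10000000
)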
And for 0.10.0.69 (w/o /patchtpr):
For a TCP test, 'iperf-1.7.0 -c dom0-name -t 60 -r':
domu->dom0: 2.4 Mb/s (huh?)
dom0->domu: 92 Mb/s (wow!)
For a UDP test requesting 10 Mb/s of bandwidth, 'iperf-1.7.0 -c dom0-name -t
60 -r -b 10000000', gave:
domu->dom0: 14.7 kb/s (huh?)
dom0->domu: 8.7 Mb/s w/12% loss (better)
And for 0.10.0.69 (with /patchtpr):
For a TCP test, 'iperf-1.7.0 -c dom0-name -t 60 -r':
domu->dom0: 1 Mb/s (double huh?)
dom0->domu: 220 Mb/s (yowza!)
For a UDP test requesting 10 Mb/s of bandwidth, 'iperf-1.7.0 -c dom0-name -t
60 -r -b 10000000', gave:
domu->dom0: 4.9 kb/s w/3% loss (huh?)
dom0->domu: 9.1 Mb/s w/10% loss (better than 0.9.x)
And with no gplpv:
For a TCP test, 'iperf-1.7.0 -c dom0-name -t 60 -r':
domu->dom0: 4.8 Mb/s
dom0->domu: 11.6 Mb/s
For a UDP test requesting 10 Mb/s of bandwidth, 'iperf-1.7.0 -c dom0-name -t
60 -r -b 10000000', gave:
domu->dom0: 0.78 Mb/s
dom0->domu: 9.4 Mb/s
For some odd reason, the domu->dom0 numbers for 0.10.x are even worse than
when booting w/no gplpv, let alone the 0.9.x numbers.
The reverse-direction (dom0->domu) TCP numbers for 0.10.x booted w/o
/patchtpr are double the 0.9.x numbers, and more than double again with
/patchtpr; the UDP numbers are faster, with less data loss, than 0.9.x.
I played around a bit with the xennet advanced settings, and only changing
the MTU made any difference, and only a slight one at that.
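For anyone else who wants to poke at the MTU: a quick sanity check of the
effective path MTU from the domu is a don't-fragment ping; with the standard
1500-byte ethernet MTU, 1472 bytes of payload plus 28 bytes of IP/ICMP header
should just fit:

    ping -f -l 1472 dom0-name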