On Saturday June 14 2008 09:18:14 am James Harper wrote:
> I've just uploaded 0.9.9 to http://www.meadowcourt.org/downloads
>
> As a reminder, the wiki page is
> http://wiki.xensource.com/xenwiki/XenWindowsGplPv
Equipment: Core 2 Duo T5600, 1.83 GHz per core, 2 MB cache, SATA drive configured for
UDMA/100. System: FC8 64-bit, Xen 3.1.2 (xen.gz 3.1.3), dom0 kernel 2.6.21.
HVM tested: XP Pro SP3 (2002, 32-bit) with 512 MB RAM, file-backed VBD on a local
disk; tested with Iometer 2006-07-27 (1 GB \iobw.tst, 5 min runs) and iperf 1.7.0
(1 min runs).
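(For anyone reproducing this: "file backed vbd" just means the guest disk is a plain
image file on the dom0 filesystem; in the HVM domU config that's a disk line along
these lines, with the path being whatever your own install uses:

  disk = [ 'file:/var/lib/xen/images/xp.img,ioemu:hda,w' ]
)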
I previously reported that 0.9.1 suffered a 20-25% Iometer performance
decrease relative to 0.8.9 on the 4k pattern (not bad for an initial rewrite from
WDF to WDM), but posted no actual numbers. So, first the old 0.8.9
numbers:
pattern 4k, 50% read, 0% random
(the first column says where Iometer's dynamo workload generator ran)
dynamo on    |   io/s | MB/s | avg i/o time (ms) | max i/o time (ms) | %CPU
domu w/gplpv |  417.5 | 1.63 |              7.39 |                 0 | 27.29
domu w/qemu  |  155.4 | 0.60 |             -4.60 |                 0 | 29.23
dom0 w/2Gb   |  891.6 | 3.48 |              1.12 |             574.4 |     0
dom0 w/2Gb   | 1033.1 | 4.04 |              0.97 |             242.4 |     0
(2nd set of dom0 numbers taken with the domU booted w/o the /gplpv switch; see the
boot.ini note below)
pattern 32k, 50% read, 0% random
domu w/gplpv | 228.6 | 7.15 | -4.65 |      0 | 21.64
domu w/qemu  | 120.4 | 3.76 | 83.63 |      0 | 28.50
dom0 w/2Gb   |  42.0 | 1.31 | 23.80 | 2084.7 |     0
dom0 w/2Gb   |  88.3 | 2.76 | 11.32 | 1267.3 |     0
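(The /gplpv switch mentioned above is the boot.ini flag that tells the GPLPV drivers
to activate in the guest; booting the same install without it falls back to the
emulated qemu devices. The boot.ini entry looks something like this, with the device
path being whatever your install uses:

  multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="XP Pro SP3" /fastdetect /gplpv
)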
and now the 0.9.9 numbers:
pattern 4k, 50% read, 0% random
dynamo on    |   io/s | MB/s | avg i/o time (ms) | max i/o time (ms) | %CPU
domu w/gplpv |  336.9 | 1.32 |            -65.35 |                 0 | 12.78
domu w/qemu  |  191.8 | 0.75 |              7.88 |                 0 | 17.40
dom0 w/2Gb   | 1051.4 | 4.11 |              0.95 |             446.0 |     0
dom0 w/2Gb   | 1111.0 | 4.34 |              0.90 |             434.2 |     0
(again, 2nd set of dom0 numbers taken with the domU booted w/o /gplpv)
pattern 32k, 50% read, 0% random
domu w/gplpv | 113.4 | 3.54 | -392.87 |      0 | 7.49
domu w/qemu  | 106.3 | 3.32 |    4.13 |      0 | 7.41
dom0 w/2Gb   |  47.3 | 1.48 |   21.10 | 2062.9 |    0
dom0 w/2Gb   |  77.3 | 2.41 |   12.94 | 1256.6 |    0
There is still a roughly 20% decrease in io/s on the 4k pattern. %CPU is way
down, but that may be because I'm connecting with rdesktop instead of VNC.
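(That figure is just the relative drop in io/s between the two driver versions;
e.g. for the 4k "domu w/gplpv" rows above, a quick shell check:

  echo "417.5 336.9" | awk '{ printf "%.1f%% decrease\n", ($1-$2)/$1*100 }'
  # prints: 19.3% decrease
)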
Next, running the test in one domain at a time, with any other domains running
only the 'idle' task. First the old numbers:
gplpv 0.8.9 (columns as above):
4k pattern  | 1170.0 | 4.57 |   7.16 |     0 | 41.34
32k pattern |  287.0 | 8.97 | -30.85 |     0 | 23.39
dom0:
4k pattern  | 1376.7 | 5.38 |   0.73 | 365.7 |     0
32k pattern | 1484.3 | 5.80 |   0.67 | 314.4 |     0
and now the new:
gplpv 0.9.9:
4k pattern  |  843.3 | 3.29 | -26.36 |     0 | 26.37
32k pattern |  192.9 | 6.03 |   5.17 |     0 |  9.12
dom0:
4k pattern  | 1702.7 | 6.65 |   0.59 | 367.0 |     0
32k pattern |  162.7 | 5.08 |   6.14 | 248.7 |     0
There's again a roughly 30% decrease on the 4k (and 32k) patterns, and again lower %CPU.
For the network, a TCP test with 0.9.1 (network-wise essentially the same as
0.8.9), 'iperf-1.7.0 -c dom0-name -t 60 -r', gave:
domu->dom0: 31 Mb/s
dom0->domu: 36 Mb/s
For a UDP test requesting 10 Mb/s of bandwidth, 'iperf-1.7.0 -c dom0-name -t
60 -r -b 10000000' gave:
domu->dom0: 2.6 Mb/s
dom0->domu: 9.9 Mb/s
And for 0.9.9:
For the TCP test, 'iperf-1.7.0 -c dom0-name -t 60 -r':
domu->dom0: 34 Mb/s (better)
dom0->domu: 89 Mb/s (wow!)
For the UDP test, again requesting 10 Mb/s, 'iperf-1.7.0 -c dom0-name -t
60 -r -b 10000000', gave:
domu->dom0: 5.2 Mb/s (better)
dom0->domu: 4.5 Mb/s with 54% loss (worse)
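(All of the client runs above assume an iperf server already listening on the dom0
side; with iperf 1.7.0 that's just:

  iperf -s      # TCP server in dom0
  iperf -s -u   # UDP server, needed for the -b runs
)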