> Hello everybody,
>
> I tried to measure the performance of the available drivers for
> Windows as an HVM guest.
> I used the gplpv drivers 0.9.11-pre17, the PV drivers from Novell,
> and the drivers from Citrix XenSource with XenServer 5.
>
> The Novell and gplpv drivers were more or less at the same speed,
> for both network and disk performance.
> The disk performance was about 10MB/s reading and writing
> sequentially, and about 1-1.5MB/s for reading and writing randomly.
> The network speed was about 10-12MB/s, via a GigaBit line.
XPsp2 has a known problem with LSO (which the gplpv drivers support... I
assume the XenSource drivers do too). Please try stopping the firewall
service (that is _not_ the same as turning off the firewall in network
settings - you actually have to go into services and stop the service).
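On XP the firewall service shows up in services.msc as "Windows
Firewall/Internet Connection Sharing (ICS)"; if I remember the internal
service name correctly, running "net stop SharedAccess" from a command
prompt should stop it as well.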
> The XenSource drivers managed at least about 30MB/s reading and
> writing sequentially, but for reading and writing randomly it was
> also only a lousy 1.5MB/s.
> Via network, over the GigaBit line, with the XenSource drivers, the
> speed was about 78 MB/s.
My testing using iperf has shown that I can reach gigabit speeds on the
network, so it is possible.
> The Windows system was XP SP2.
> hdparm on the dom0 gives about 60MB/s.
> The network test was an ftp transfer, just downloading a 500MB file
> without writing it to disk (writing to nul). The same test in the
> dom0, writing the file to /dev/null, gave me 112MB/s.
Please use iperf to test network speeds. It's a bit more comparable to
the testing that I and others have done.
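For example (assuming you have iperf installed in the dom0 and an iperf
build for Windows in the guest - the exact builds and paths will vary),
something like:

  dom0$ iperf -s
  guest> iperf.exe -c <dom0 ip address> -t 30

gives a throughput figure that is easy to compare between setups.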
> So I am wondering, what are the expected speed gains for the gplpv
> drivers?
> Is the performance of the drivers better with different Windows
> versions, e.g. Windows Server 2003?
XPsp2 with the firewall service enabled behaves badly when LSO is
enabled.
Windows 2003 SP1 has also performed badly in some testing I have done
(not as badly as XPsp2 - something like 50% worse performance instead of
90% worse).
My best results have been under Windows 2003 SP2. SP2 introduced some
pretty hefty improvements to NDIS, which is the Windows network driver
layer.
The performance advantages I'd expect to see from the PV drivers are
increased throughput and lower CPU usage.
Also, depending on the test tool you are using, the gplpv disk drivers
may perform quite poorly. This happens if the tool gives the Windows
kernel buffers that are not aligned to a 512 byte boundary.
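To show what "aligned" means here, a minimal sketch of a read done the
aligned way, using unbuffered Win32 I/O (this is just an illustration -
the device path, sizes and minimal error handling are for brevity, and
it needs to run as administrator):

#include <windows.h>
#include <malloc.h>
#include <stdio.h>

int main(void)
{
    const DWORD sector = 512;
    const DWORD len = 64 * 1024;

    /* _aligned_malloc guarantees the buffer starts on a 512 byte
       boundary; a plain malloc()ed buffer may not, and an unaligned
       buffer is what forces XenVbd onto its slow path. */
    void *buf = _aligned_malloc(len, sector);
    if (buf == NULL)
        return 1;

    /* FILE_FLAG_NO_BUFFERING bypasses the Windows cache, so the
       request the driver sees matches what the test asked for. */
    HANDLE h = CreateFileA("\\\\.\\PhysicalDrive0", GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, NULL,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        _aligned_free(buf);
        return 1;
    }

    DWORD got = 0;
    ReadFile(h, buf, len, &got, NULL);
    printf("read %lu bytes\n", (unsigned long)got);

    CloseHandle(h);
    _aligned_free(buf);
    return 0;
}

A tool that reads into a plain malloc()ed or stack buffer instead will
show up in the stat_unaligned_xxx counters below.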
Please run DebugView from sysinternals.com while you are running your
disk performance tests. It will periodically output some stats like:
XenVbd stat_interrupts = 2408914
XenVbd stat_interrupts_for_me = 314368
XenVbd stat_reads = 131507
XenVbd stat_writes = 208567
XenVbd stat_unaligned_le_4096 = 7
XenVbd stat_unaligned_le_8192 = 0
XenVbd stat_unaligned_le_16384 = 0
XenVbd stat_unaligned_le_32768 = 0
XenVbd stat_unaligned_le_65536 = 0
XenVbd stat_unaligned_gt_65536 = 0
XenVbd stat_no_shadows = 0
XenVbd stat_no_grants = 0
XenVbd stat_outstanding_requests = 1
The things I'm interested in are the stat_unaligned_xxx figures. The
only unaligned requests I see during day to day operation are somewhere
between 5 and 10 that occur very very early during boot
(stat_unaligned_le_4096 = 7). However, I have seen chkdsk, defrag, and
at least one testing tool issue requests not aligned on 512 byte
boundaries. When that happens, the gplpv drivers have to break the
request into 4096 byte chunks and submit each chunk, one at a time, to
blkback, which really slows things down.
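To give a rough idea of why that hurts, the difference is something
like this (illustrative pseudocode only - submit_to_blkback,
wait_for_completion and bounce_buffer are made-up names, not the actual
XenVbd code):

/* aligned request: grant the caller's buffer to dom0 and hand the
   whole thing to blkback as one request */
submit_to_blkback(req->buffer, req->offset, req->length);

/* unaligned request: the buffer can't be granted directly, so bounce
   it through an aligned buffer in 4096 byte chunks, one at a time */
for (done = 0; done < req->length; done += 4096) {
    chunk = min(4096, req->length - done);
    memcpy(bounce_buffer, (PUCHAR)req->buffer + done, chunk); /* write path */
    submit_to_blkback(bounce_buffer, req->offset + done, chunk);
    wait_for_completion();  /* each chunk is a full round trip to dom0 */
}

One round trip per 4096 bytes is what turns a single large request into
dozens of small ones.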
James
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users