Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows

To: xen-users@xxxxxxxxxxxxxxxxxxx
Subject: Re: [Xen-users] Release 0.8.0 of GPL PV Drivers for Windows
From: jim burns <jim_burn@xxxxxxxxxxxxx>
Date: Sun, 2 Mar 2008 11:52:07 -0500
Delivery-date: Sun, 02 Mar 2008 08:52:46 -0800
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
In-reply-to: <20080302095146.GW21162@xxxxxxxxxxxxxxx>
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
References: <AEC6C66638C05B468B556EA548C1A77D0131AEF2@trantor> <200803012225.20141.jim_burn@xxxxxxxxxxxxx> <20080302095146.GW21162@xxxxxxxxxxxxxxx> (sfid-20080302_045552_791737_9827D673)
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: KMail/1.9.9
On Sunday 02 March 2008 04:51:47 am Pasi Kärkkäinen wrote:
> Yep. I asked because of the "bad" 80 Mbit/sec result on a 100 Mbit network.. 

My guess is my CPU(s) don't have enough raw power to saturate the NIC, but 
let's see what happens after your suggestions are implemented.

> That's because with some version of xen and/or drivers (I'm not sure
> actually) it was a known fact that performance got bad when you had hw
> checksum calculations turned on..
>
> So just to see if that's the case here.. I guess this was mostly for domU..

Ahh, because there's no real hardware behind the domU's virtual NIC - makes 
sense.

Alright - let's try one change/set of related changes at a time to isolate 
their effect.

> > rx-checksumming: off
> > tx-checksumming: off
> > scatter-gather: off
> > tcp segmentation offload: off
> > udp fragmentation offload: off
> > generic segmentation offload: off
>
> Maybe try turning on offloading/checksumming settings here?

Ok - before any changes, 'iperf -c insp6400 -t 60' gives 78.7 Mbps.

On SuSE:
[830] > sudo ethtool -K eth0 tx on
Cannot set device tx csum settings: Operation not supported
[2]    23551 exit 85    sudo ethtool -K eth0 tx on
jimb@Dell4550 03/02/08 10:36AM:~
[831] > sudo ethtool -K eth0 tso on
Cannot set device tcp segmentation offload settings: Operation not supported
[2]    23552 exit 88    sudo ethtool -K eth0 tso on

On fc8:
[742] > sudo ethtool -K peth0 tx on
Password:
Cannot set device tx csum settings: Operation not supported
zsh: exit 85    sudo ethtool -K peth0 tx on
jimb@Insp6400 03/02/08 10:38AM:~
[743] > sudo ethtool -K peth0 tso on
Cannot set device tcp segmentation offload settings: Operation not supported
zsh: exit 88    sudo ethtool -K peth0 tso on
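
(For completeness: ethtool takes several offload settings in one 
invocation, so if the hardware had cooperated, the whole set could have 
been flipped in one go - something like:

sudo ethtool -K eth0 rx on tx on sg on tso on

- but clearly neither of these NICs will play along.)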

Ahem - moving on!

> I usually use at least 256k window sizes :)

Adding '-w 262144' on both the server and client side, iperf gets 72.1 
Mbps. Worse.
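
(Spelled out, with insp6400 as the server box, that's roughly:

on the server:  iperf -s -w 262144
on the client:  iperf -c insp6400 -w 262144 -t 60

iperf prints the window size it actually got, which can be smaller than 
requested if the kernel's tcp buffer limits cap it.)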

> Did you try with multiple threads at the same time? Did it have any effect?

Adding '-P 4' to the client, iperf gets an aggregate rate of 74.5 Mbps. 
Worse.
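
(That is, 'iperf -c insp6400 -t 60 -P 4' - four parallel client threads; 
iperf then reports a rate per thread plus a [SUM] line.)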

> "ifconfig eth0 txqueuelen <value>" on linux.. I don't know how to do that
> in windows.
>
> But it's important to do that on dom0 for vifX.Y devices.. those are the
> dom0 sides of the virtual machine virtual NICs.

[740] > ifconfig
[...]
peth0     Link encap:Ethernet  HWaddr 00:15:C5:04:7D:4F
          inet6 addr: fe80::215:c5ff:fe04:7d4f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1492  Metric:1
          RX packets:11545320 errors:0 dropped:237 overruns:0 frame:0
          TX packets:13476839 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:421343991 (401.8 MiB)  TX bytes:4224204231 (3.9 GiB)
          Interrupt:22

tap0      Link encap:Ethernet  HWaddr 0E:92:BB:CA:D8:DA
          inet6 addr: fe80::c92:bbff:feca:d8da/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:14090 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:0 (0.0 b)  TX bytes:7860525 (7.4 MiB)

vif4.0    Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2595441 errors:0 dropped:0 overruns:0 frame:0
          TX packets:923150 errors:0 dropped:795 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:3366016758 (3.1 GiB)  TX bytes:166299319 (158.5 MiB)

vif16.0   Link encap:Ethernet  HWaddr FE:FF:FF:FF:FF:FF
          inet6 addr: fe80::fcff:ffff:feff:ffff/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9848 errors:0 dropped:0 overruns:0 frame:0
          TX packets:20828 errors:0 dropped:4698 overruns:0 carrier:0
          collisions:0 txqueuelen:32
          RX bytes:834219 (814.6 KiB)  TX bytes:16017488 (15.2 MiB)

Yowza! 32? Those *are* small!
500 isn't much better for tap0, either.
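
So, per your suggestion, raising the dom0-side queues would be something 
like this (1000 just mirrors peth0's default - I haven't settled on a 
value):

sudo ifconfig vif4.0 txqueuelen 1000
sudo ifconfig vif16.0 txqueuelen 1000
sudo ifconfig tap0 txqueuelen 1000

Note the vifN.M names change with the domain id, so this has to be redone 
(or scripted into the vif hotplug scripts) each time a guest starts.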

> > > - Check sysctl net.core.netdev_max_backlog setting.. it should be at
> > > least 1000, possibly even more.. this applies to dom0 and linux domU.
> >
> > Where is this set, and what do I have to restart to make it
> > effective? /etc/sysctl.conf?
>
> Yep, modify /etc/sysctl.conf and run "sysctl -p /etc/sysctl.conf".
>
> > In general, are there any downsides in changing these values?
>
> http://kb.pert.geant2.net/PERTKB/InterfaceQueueLength

Interesting link - thanx.

Ok - setting 'ifconfig eth0 txqueuelen 2500' (peth0 on fc8) and 
net.core.netdev_max_backlog = 2500 on both machines, iperf gets 65.9 Mbps. 
Worse. Probably only useful for Gbps links. Removing these changes, as in 
all the cases above.
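
(Concretely, that was: add the line

net.core.netdev_max_backlog = 2500

to /etc/sysctl.conf on each box, reload with 'sysctl -p /etc/sysctl.conf', 
and set the queue with 'ifconfig eth0 txqueuelen 2500'. The txqueuelen 
setting doesn't survive a reboot, which in this case is just as well.)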

> There's something about these settings
>
> btw. what was the CPU usage for dom0 and for domU when you did these iperf
> tests?

About 75%. I've noticed, at least on my SuSE box, that multimedia playback 
(under wine) suffers once cpu usage goes over 50%.

On my Windows guest, setting tap0's txqueuelen to 1000 had no effect (it 
probably wouldn't, since it's the receiving side); setting the window size 
to 256k hung my guest; after rebooting the guest, it had no effect on a 
2nd try; and changing sysctl had no effect. CPU % was about 80-85% without 
any changes (negligible on fc8), averaged over 2 vcpus. With any of the 
changes above, CPU % went down to 65-75% - the only change I noticed.
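
For watching per-domain cpu while these tests run, xentop (aka 'xm top') 
in dom0 shows a live CPU(%) column per domain:

xentop -d 2

('-d' just sets the refresh interval in seconds.)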

Have fun digesting this :-)

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users