This is an archived copy of the Xen.org mailing list, which we have preserved to ensure that existing links to archives are not broken. The live archive, which contains the latest emails, can be found at http://lists.xen.org/


[Xen-users] Poor network performance - caused by inadequate vif configuration?

To: <xen-users@xxxxxxxxxxxxxxxxxxx>
Subject: [Xen-users] Poor network performance - caused by inadequate vif configuration?
From: "Schmidt, Werner (Werner)" <wernerschmidt@xxxxxxxxx>
Date: Thu, 24 May 2007 15:16:29 +0200
Delivery-date: Thu, 24 May 2007 06:15:01 -0700
Envelope-to: www-data@xxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-users-request@lists.xensource.com?subject=help>
List-id: Xen user discussion <xen-users.lists.xensource.com>
List-post: <mailto:xen-users@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-users>, <mailto:xen-users-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-users-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AceeBb2bmMX3nMxyQ76CGYE3o6Z1bg==
Thread-topic: Poor network performance - caused by inadequate vif configuration?



Similar to some mail threads found on this list and in other Xen-related threads, I had problems with the network performance of my test system:


  • software base of dom0/domU: RHEL5 (Xen 3.0.3, Red Hat 2.6.18-8.el5xen SMP kernel)
  • IBM x306 servers with a 3 GHz P4 with HT support, coupled via a Gigabit Ethernet switch
  • standard Xen bridging network configuration
  • test tool: iperf
  • Xen domUs running in PV mode (the P4 does not support VT)
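For reference, measurements like the ones below can be taken with an iperf setup along these lines (hostnames are placeholders, not taken from this post):

```shell
# TCP server on the receiving machine (iperf's default port is 5001):
iperf -s
# 10-second TCP throughput test from the sending machine:
iperf -c machine2 -t 10
# Bidirectional test (both directions at once), used further below:
iperf -c machine2 -d
```

For the third test case the client runs in dom0 of machine 1 and the server inside the domU of machine 2.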


The data transfer rates measured with iperf were as follows:

  • dom0/machine 1 => dom0/machine 2: ~800 MBit/s
  • domU/machine 1 => dom0/machine 2: ~700 MBit/s
  • dom0/machine 1 => domU/machine 2: ~40 MBit/s


The poor result in the last test case, and the difference between test cases 2 and 3, remained more or less constant across various configurations of the test systems:

  • credit or sedf scheduler
  • various scheduler configurations
  • copy mode and flipping mode of the netfront driver


A detailed analysis with tcpdump/wireshark showed that data was being lost within the TCP stream, resulting in TCP retransmissions and therefore pauses in the data transfer (in one test case I saw a 200 ms transmission gap caused by TCP retransmissions every 230 ms, which explains the breakdown of the data rate).
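A capture for such an analysis might look like this (the interface name is a placeholder for the bridged physical NIC in dom0; 5001 is iperf's default TCP port):

```shell
# Capture the iperf stream for offline inspection in wireshark;
# -s 96 keeps only the headers, which is enough to spot retransmissions.
tcpdump -i peth0 -s 96 -w iperf.pcap tcp port 5001
```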


Now, while looking for the cause of the data losses (this was why I checked the copy mode of the netfront driver), I noticed that the txqueuelen parameter of the vif devices connecting the bridge to the domUs was set to 32 (I have no idea where and for what reason this value is configured initially; note that the txqueuelen value for Ethernet devices is set to 1000).
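The current value can be inspected with ifconfig or via sysfs; a minimal sketch (using lo so it runs anywhere; on the Xen host you would substitute a vif backend name such as vif1.0):

```shell
# Read the transmit queue length of a network device from sysfs.
# "lo" stands in for a dom0 vif device like vif1.0.
dev=lo
cat /sys/class/net/"$dev"/tx_queue_len
```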

After raising this parameter to higher values (128-512) I got much better performance in test case 3: TCP throughput now reaches 700 MBit/s and more, and using the iperf -d option (TCP data streams in both directions) now gives combined values of more than 900 MBit/s.
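The change itself is a one-liner in dom0 (requires root; the vif name and the value 512 are examples, not prescriptions):

```shell
# Raise the vif transmit queue length from the default 32:
ifconfig vif1.0 txqueuelen 512
# equivalent with iproute2:
ip link set vif1.0 txqueuelen 512
```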


I will also evaluate the parameter settings for the other test cases to find the best values, but I think a suitable configuration of the txqueuelen parameter of the vif interfaces is the most important factor in getting good network performance for a configuration as described above (comparable to other virtualization solutions).
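One possible way to apply this to every new vif automatically (an assumption of mine, not something tested in this post) is to hook into the vif hotplug script that Xen 3.x runs for each new interface:

```shell
# Sketch: in /etc/xen/scripts/vif-bridge, after the interface has been
# added to the bridge, the queue could be raised for every domU vif
# ($vif holds the interface name inside that script; 512 is an example):
ifconfig "$vif" txqueuelen 512
```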







