xen-devel

[Xen-devel] I/O descriptor ring size bottleneck?

To: xen-devel@xxxxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] I/O descriptor ring size bottleneck?
From: Diwaker Gupta <diwakergupta@xxxxxxxxx>
Date: Sun, 20 Mar 2005 13:47:58 -0800
Delivery-date: Sun, 20 Mar 2005 21:48:55 +0000
List-archive: <http://sourceforge.net/mailarchive/forum.php?forum=xen-devel>
List-help: <mailto:xen-devel-request@lists.sourceforge.net?subject=help>
List-id: List for Xen developers <xen-devel.lists.sourceforge.net>
List-post: <mailto:xen-devel@lists.sourceforge.net>
List-subscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=subscribe>
List-unsubscribe: <https://lists.sourceforge.net/lists/listinfo/xen-devel>, <mailto:xen-devel-request@lists.sourceforge.net?subject=unsubscribe>
Reply-to: Diwaker Gupta <diwakergupta@xxxxxxxxx>
Sender: xen-devel-admin@xxxxxxxxxxxxxxxxxxxxx

Hi everyone,

I'm doing some networking experiments over high-BDP topologies. Right
now the configuration is quite simple -- two Xen boxes connected via a
dummynet router. The dummynet router is set to limit bandwidth to
500 Mbps and simulate an RTT of 80 ms.
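
For concreteness, the shaping on the router amounts to a dummynet
configuration along these lines (the exact ipfw rules depend on the
router's interfaces; 40 ms of delay in each direction gives the 80 ms
RTT):

ipfw pipe 1 config bw 500Mbit/s delay 40ms
ipfw pipe 2 config bw 500Mbit/s delay 40ms
ipfw add 100 pipe 1 ip from any to any in
ipfw add 200 pipe 2 ip from any to any out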

I'm using the following sysctl values:
net.ipv4.tcp_rmem = 4096        87380   4194304
net.ipv4.tcp_wmem = 4096        65536   4194304
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_bic = 0

(TCP Westwood and Vegas are also turned off for now)
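
As a back-of-the-envelope check on those buffer sizes: the path BDP is
roughly 500 Mbit/s * 80 ms = 5 MB in aggregate, and with 50 flows the
fair share per flow is ~10 Mbit/s, i.e. a window of only ~100 KB per
connection, so the 4 MB tcp_rmem/tcp_wmem caps above shouldn't be
what's limiting the 50-flow case.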

Now if I run 50 netperf flows lasting 80 seconds (1000 RTTs) from
inside a VM on one box talking to netserver running in a VM on the
other box, I get a per-flow throughput of around 2.5 Mbps (which
sucks, but let's ignore the absolute value for the moment).

If I run the same test, but this time from inside dom0, I get a
per-flow throughput of around 6 Mbps.
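
For reference, each flow in both cases is just a plain netperf
TCP_STREAM test; the runs are driven with something along these lines
($SERVER stands for the address of the box running netserver):

for i in `seq 1 50`; do
    netperf -H $SERVER -l 80 -t TCP_STREAM &
done
wait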

I'm trying to understand the difference in performance. It seems to me
that the I/O descriptor ring sizes are hard-coded to 256 -- could that
be a bottleneck here? If not, have people experienced similar problems?
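
To make the ring question concrete, here's a toy model of a fixed
256-slot descriptor ring (just an illustration of the slot limit, not
the actual netfront/netback code; the field names and layout are made
up). With standard 1500-byte packets, a completely full ring only
references about 384 KB (256 * 1500 bytes) of payload at any instant:

/* Toy sketch, NOT the real Xen I/O ring code. */
#include <stdint.h>
#include <stdio.h>

#define RING_SIZE 256                    /* must be a power of two      */
#define RING_MASK (RING_SIZE - 1)

struct desc {                            /* hypothetical descriptor     */
    uint64_t addr;                       /* buffer address              */
    uint16_t len;                        /* buffer length in bytes      */
};

struct ring {
    uint32_t prod;                       /* bumped by the producer      */
    uint32_t cons;                       /* bumped by the consumer      */
    struct desc slots[RING_SIZE];
};

static int ring_post(struct ring *r, uint64_t addr, uint16_t len)
{
    if (r->prod - r->cons == RING_SIZE)
        return -1;                       /* ring full: sender must wait */
    r->slots[r->prod & RING_MASK] = (struct desc){ .addr = addr, .len = len };
    r->prod++;
    return 0;
}

int main(void)
{
    struct ring r = { 0 };
    unsigned posted = 0;

    /* Fill the ring with dummy 1500-byte packet descriptors. */
    while (ring_post(&r, 0, 1500) == 0)
        posted++;

    printf("slots filled: %u, payload referenced: %u bytes\n",
           posted, posted * 1500);       /* 256 slots, 384000 bytes     */
    return 0;
}

With 50 flows all funnelling through one such ring, only a handful of
slots per flow can be in the ring at any instant, which is part of why
I'm wondering about the ring size rather than the TCP settings.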

TIA
-- 
Diwaker Gupta
http://resolute.ucsd.edu/diwaker


