Re: [Xen-devel] Network performance - sending from VM to VM using TCP

To: Cherie Cheung <ccyxen@xxxxxxxxx>
Subject: Re: [Xen-devel] Network performance - sending from VM to VM using TCP
From: Nivedita Singhvi <niv@xxxxxxxxxx>
Date: Thu, 26 May 2005 17:05:27 -0700
Cc: xen-devel@xxxxxxxxxxxxxxxxxxx, xen-users@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4713f85905052522286da17fd8@xxxxxxxxxxxxxx>
References: <4713f859050525152448a0f609@xxxxxxxxxxxxxx> <4295091D.10505@xxxxxxxxxx> <4713f85905052522286da17fd8@xxxxxxxxxxxxxx>
User-agent: Mozilla Thunderbird 0.8 (X11/20041020)
Cherie Cheung wrote:

>> Could you test with different send sizes?
>
> No special reason for that. What do you mean by the kernel not using
> the entire buffer to store the data? I have tried different send
> sizes; it doesn't make any noticeable difference.

Normally, if you do a write that fits in the send buffer,
the write will return immediately. If you don't have enough
room, it will block until the buffer drains and there is
enough room. Normally, the kernel reserves a fraction of
the socket buffer space for internal kernel data management.
If you do a setsockopt of 128K bytes, for instance, and then
do a getsockopt(), you will notice that the kernel will report
twice what you asked for.
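
For instance, a minimal sketch (illustrative only, not taken from netperf
or this thread) of the doubling behaviour on Linux:

    /* Ask for a 128K send buffer, then read back what the kernel
     * actually reserved.  On Linux, getsockopt() reports twice the
     * requested value, the extra half being kept for the kernel's
     * own bookkeeping.  Error checking elided for brevity. */
    #include <stdio.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        int requested = 128 * 1024;
        int granted;
        socklen_t len = sizeof(granted);

        setsockopt(s, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested));
        getsockopt(s, SOL_SOCKET, SO_SNDBUF, &granted, &len);

        /* Typically prints: requested 131072, kernel reports 262144 */
        printf("requested %d, kernel reports %d\n", requested, granted);

        close(s);
        return 0;
    }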


> The performance only improved a little.
>
> TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw15.ucsd.edu
> (172.19.222.215) port 0 AF_INET
>
> Recv    Send    Send
> Socket  Socket  Message  Elapsed
> Size    Size    Size     Time     Throughput
> bytes   bytes   bytes    secs.    10^6bits/sec
>
> 1398080 1398080 1398080   80.39      26.55

Ah, the idea is not to use such a large send message
size! Increase your buffer sizes - but not your send
message size. I'm not sure netperf handles that well -
this is a memory allocation issue. netperf is an intensive
application in TCP streams - it does no disk activity,
it generates the data on the fly and does repeated
writes of that size. You might just be blocking on memory.

I'd be very interested in what you get with those buffer
sizes and 1K, 4K and 16K message sizes.
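
To make the distinction concrete, here is a sketch of what the send side
of a TCP stream test essentially does (my illustration, not netperf's
actual code; 'sock' is assumed to be an already-connected TCP socket):

    /* Repeatedly write a fixed-size message from an in-memory buffer,
     * netperf-style: no disk I/O, the data is generated on the fly.
     * write() returns as soon as the message fits in the socket send
     * buffer and blocks when it doesn't - so the message size and the
     * socket buffer size play quite different roles. */
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static void stream_messages(int sock, size_t msg_size, long count)
    {
        char *msg = malloc(msg_size);
        if (msg == NULL)
            return;
        memset(msg, 'x', msg_size);      /* data generated on the fly */

        for (long i = 0; i < count; i++) {
            size_t sent = 0;
            while (sent < msg_size) {    /* handle partial writes */
                ssize_t n = write(sock, msg + sent, msg_size - sent);
                if (n < 0) {             /* error handling elided */
                    free(msg);
                    return;
                }
                sent += (size_t)n;
            }
        }
        free(msg);
    }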

> [...] can't compare with that of domain0 to domain0.

So both domains have 128MB? Can you bump that up to, say, 512MB?

>> Were you seeing losses, queue overflows?

> How to check that?

You can do a netstat -s or an ifconfig, for instance.
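
(Both are generic Linux tools, nothing Xen-specific: in the netstat -s
output, the "segments retransmitted" counter in the Tcp: section is the
quickest indicator of loss, and ifconfig's per-interface RX/TX lines
carry "dropped" and "overruns" counts that reveal queue overflows.)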

> Is it really a problem with the buffer size and send size? domain0
> can achieve such good performance under the same settings. Is the
> bottleneck caused by overhead in the VM?
>
> Also, I performed some more tests,
> with bandwidth 150Mbit/s and RTT 40ms:
>
> domain0 to domain0
>
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  65536  65536    80.17      135.01
>
> vm to vm
>
> Recv   Send    Send
> Socket Socket  Message  Elapsed
> Size   Size    Size     Time     Throughput
> bytes  bytes   bytes    secs.    10^6bits/sec
>
>  87380  65536  65536    80.55      134.80
>
> Under these settings, VM to VM performed as well as domain0 to
> domain0. If I increased or decreased the BDP, the performance
> dropped again.

Very interesting - possibly you're managing to send
closer to your real bandwidth-delay product? It would be
interesting to get the numbers across a range of RTTs.
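
(For concreteness, using the figures quoted above: the bandwidth-delay
product at 150 Mbit/s and a 40 ms RTT is 150 x 10^6 bit/s * 0.040 s =
6 x 10^6 bits, i.e. 750 KB - that much data has to be in flight to fill
the pipe, so the socket buffers need to be at least roughly that large.)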

thanks,
Nivedita


