Cherie Cheung wrote:
Could you test with different send sizes?
No special reason for that. What do you mean by the kernel not using the
entire buffer to store the data? I have tried different send sizes. It
doesn't make any noticeable difference.
Normally, if you do a write that fits in the send buffer,
the write will return immediately. If there isn't enough
room, it will block until the buffer drains and there is
enough room. The kernel also reserves a fraction of
the socket buffer space for internal data management:
if you do a setsockopt() of 128K bytes, for instance, and then
do a getsockopt(), you will notice that the kernel reports
twice what you asked for.
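As a minimal sketch (assuming Linux SO_SNDBUF semantics; the 128K figure
is just the example quoted above), the doubling is easy to see from a few
lines of C:

#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int s = socket(AF_INET, SOCK_STREAM, 0);
    int requested = 128 * 1024;     /* ask for a 128K send buffer */
    int actual = 0;
    socklen_t len = sizeof(actual);

    if (s < 0) {
        perror("socket");
        return 1;
    }
    if (setsockopt(s, SOL_SOCKET, SO_SNDBUF, &requested, sizeof(requested)) < 0)
        perror("setsockopt");
    if (getsockopt(s, SOL_SOCKET, SO_SNDBUF, &actual, &len) < 0)
        perror("getsockopt");

    /* On Linux the kernel doubles the requested value to cover its own
     * bookkeeping overhead, so this typically prints 262144. */
    printf("requested %d bytes, kernel reports %d bytes\n", requested, actual);
    return 0;
}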
The performance only improved a little.
TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to dw15.ucsd.edu
(172.19.222.215) port 0 AF_INET
Recv     Send     Send
Socket   Socket   Message  Elapsed
Size     Size     Size     Time     Throughput
bytes    bytes    bytes    secs.    10^6bits/sec

1398080  1398080  1398080  80.39      26.55
Ah, the idea is not to use such a large send message
size! Increase your buffer sizes, but not your send
message size. Not sure if netperf handles that well -
this is a memory allocation issue. netperf is an intensive
application in TCP stream mode - it does no disk
activity, it's generating data on the fly and doing
repeated writes of that amount. You might just be
blocking on memory.
I'd be very interested in what you get with those buffer
sizes and 1K, 4K, 16K message sizes.
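To make the buffer-size vs. message-size distinction concrete, here is a
rough C sketch (not the actual netperf code; the address is the one from
your test, while the port and the sizes are made up for illustration) of a
TCP_STREAM-style sender that keeps the socket buffer large and varies only
the write size:

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void)
{
    int sndbuf = 128 * 1024;   /* socket buffer: keep this large       */
    int msg_size = 4 * 1024;   /* message size: vary 1K, 4K, 16K, ...  */
    char *buf = calloc(1, msg_size);
    struct sockaddr_in peer = { 0 };
    int s = socket(AF_INET, SOCK_STREAM, 0);

    if (s < 0) {
        perror("socket");
        return 1;
    }
    peer.sin_family = AF_INET;
    peer.sin_port = htons(12345);                  /* hypothetical port */
    inet_pton(AF_INET, "172.19.222.215", &peer.sin_addr);

    setsockopt(s, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf));
    if (connect(s, (struct sockaddr *)&peer, sizeof(peer)) < 0) {
        perror("connect");
        return 1;
    }

    /* Each write returns as soon as the message fits in the (large)
     * send buffer, so the application rarely blocks; the kernel drains
     * the buffer onto the wire in the background. */
    for (int i = 0; i < 100000; i++) {
        if (write(s, buf, msg_size) < 0) {
            perror("write");
            break;
        }
    }
    close(s);
    free(buf);
    return 0;
}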
The VM to VM performance still can't compare with that of domain0 to domain0.
So both domains have 128MB? Can you bump that up to, say, 512MB?
Were you seeing losses, queue overflows?
How do I check that?
You can do a netstat -s or an ifconfig, for instance.
Is it really a problem with the buffer size and send size? domain0
can achieve such good performance under the same settings. Or is the
bottleneck the overhead in the VM?
Also, I performed some more tests,
with bandwidth 150 Mbit/s and RTT 40 ms:
domain0 to domain0
Recv     Send     Send
Socket   Socket   Message  Elapsed
Size     Size     Size     Time     Throughput
bytes    bytes    bytes    secs.    10^6bits/sec

  87380    65536    65536  80.17     135.01
VM to VM
Recv     Send     Send
Socket   Socket   Message  Elapsed
Size     Size     Size     Time     Throughput
bytes    bytes    bytes    secs.    10^6bits/sec

  87380    65536    65536  80.55     134.80
Under these settings, VM to VM performed as well as domain0 to domain0.
If I increased or decreased the BDP, the performance dropped again.
Very interesting - possibly you're managing to send
closer to your real bandwidth-delay-product? Would be
interesting to get the numbers across a range of RTTs.
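For what it's worth, the nominal bandwidth-delay product at those settings
works out as below (a trivial C check using the 150 Mbit/s and 40 ms figures
quoted above):

#include <stdio.h>

int main(void)
{
    /* BDP = bandwidth * RTT: the amount of data that must be in flight
     * to keep the pipe full. */
    double bandwidth_bps = 150e6;   /* 150 Mbit/s link       */
    double rtt_s = 0.040;           /* 40 ms round-trip time */
    double bdp_bytes = bandwidth_bps * rtt_s / 8.0;

    printf("BDP = %.0f bytes (~%.0f KB)\n", bdp_bytes, bdp_bytes / 1024.0);
    /* Prints: BDP = 750000 bytes (~732 KB) */
    return 0;
}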
thanks,
Nivedita