Re: [Xen-devel] New MPI benchmark performance results (update)

To: Nivedita Singhvi <niv@xxxxxxxxxx>
Subject: Re: [Xen-devel] New MPI benchmark performance results (update)
From: xuehai zhang <hai@xxxxxxxxxxxxxxx>
Date: Tue, 03 May 2005 17:05:50 -0500
Cc: Xen-devel@xxxxxxxxxxxxxxxxxxx
Delivery-date: Tue, 03 May 2005 22:05:49 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
In-reply-to: <4277DE0E.9010905@xxxxxxxxxx>
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
References: <42774030.4000708@xxxxxxxxxxxxxxx> <4277DE0E.9010905@xxxxxxxxxx>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 1.0.2 (Windows/20050317)
Hi Nivedita,

Thanks for the response and the suggestion!

>> Hi all,
>>
>> In the following post I sent in early April
>> (http://lists.xensource.com/archives/html/xen-devel/2005-04/msg00091.html), [...]



> Hi, thanks for sharing the data - it was interesting.
> I tried to find additional data on the benchmarks using
> the link you have for the user manual but it gave me
> a 404 Error.

I have corrected the link error, and the user manual is now accessible through
that link.

> It wasn't clear whether your benchmarks
> use TCP or UDP or possibly raw sockets?

I've read through the PMB user manual and it doesn't mention which communication protocol is used. However, I did read in several references that "typically TCP/IP is the protocol used over Ethernet networks for MPI communications."
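
One way I could verify this empirically during the rerun is to count the entries in the kernel's TCP and UDP socket tables inside the domain while PMB is running; whichever table grows is the transport the MPI library actually opened. Below is a rough sketch of such a check (my own construction, assuming a Linux guest with /proc mounted; compile with plain gcc, run it once before and once during a benchmark, and compare the counts):

#include <stdio.h>

/* Count entries in a /proc/net socket table (one line per socket,
 * plus one header line).  Returns -1 if the file cannot be opened. */
static int count_sockets(const char *path)
{
    FILE *f = fopen(path, "r");
    int c, lines = 0;

    if (!f)
        return -1;
    while ((c = fgetc(f)) != EOF)
        if (c == '\n')
            lines++;
    fclose(f);
    return lines - 1;   /* subtract the header line */
}

int main(void)
{
    printf("tcp sockets: %d\n", count_sockets("/proc/net/tcp"));
    printf("udp sockets: %d\n", count_sockets("/proc/net/udp"));
    return 0;
}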

> As has been pointed out by several people, running the
> 2.6 kernel and comparing apples to apples as much as possible
> would help.

I fully agree with that, and I am currently trying to rerun the experiments using the same kernel version for both dom0 and domU (and maybe native Linux too).

> Is there any chance you kept some of the system statistics
> and settings (netstat -s, sysctl -a info)?

I did not collect them while running the benchmarks, but I will try to log them when I rerun the experiments.
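
Concretely, I could wrap each run with a small snapshot program along these lines (just a sketch: it shells out to netstat and sysctl, so both must be on the PATH, and the log file names are only illustrative):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

/* Dump the output of one command to a timestamped log file. */
static void snapshot(const char *cmd, const char *tag)
{
    char full[512];

    /* e.g. "netstat-1115150750.log" */
    snprintf(full, sizeof(full), "%s > %s-%ld.log 2>&1",
             cmd, tag, (long)time(NULL));
    if (system(full) != 0)
        fprintf(stderr, "warning: '%s' failed\n", cmd);
}

int main(void)
{
    snapshot("netstat -s", "netstat");  /* per-protocol counters */
    snapshot("sysctl -a", "sysctl");    /* current kernel settings */
    return 0;
}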

> Did you tune the settings for the system at all?

No, I did not do anything specific to tune the system.

>> Alltoall latency (max domU/linux > 60). The performance difference in the
>> last example is HUGE, and we could not think of a reasonable explanation
>> for why transferring the 512B message size is so different from the other
>> sizes. We would appreciate it if you could provide your insight into such
>> a big performance problem in these benchmarks.


> You have an anomalous point on most of the results - and again,
> knowing what kind of traffic this is would really help.

I will try to dig into the source code and find out.
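
Before that, a stripped-down timer might show whether the 512-byte anomaly reproduces outside PMB at all. The loop below is my own sketch, not PMB's code (PMB adds warm-up and other controls); it times MPI_Alltoall at message sizes around 512 bytes and can be compiled with mpicc:

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int sizes[] = { 256, 512, 1024 };   /* bytes sent to each rank */
    int iters = 1000, rank, nprocs, s, i;
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (s = 0; s < 3; s++) {
        int bytes = sizes[s];
        char *sendbuf = malloc(bytes * nprocs);
        char *recvbuf = malloc(bytes * nprocs);

        memset(sendbuf, 0, bytes * nprocs);
        MPI_Barrier(MPI_COMM_WORLD);

        t0 = MPI_Wtime();
        for (i = 0; i < iters; i++)
            MPI_Alltoall(sendbuf, bytes, MPI_CHAR,
                         recvbuf, bytes, MPI_CHAR, MPI_COMM_WORLD);
        t1 = MPI_Wtime();

        if (rank == 0)
            printf("%5d bytes/rank: %8.2f usec per Alltoall\n",
                   bytes, (t1 - t0) / iters * 1e6);

        free(sendbuf);
        free(recvbuf);
    }
    MPI_Finalize();
    return 0;
}

If the jump at 512 bytes shows up here too, the problem is likely below the benchmark (in the MPI library or the network path) rather than in PMB itself.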

Thanks again for the help.

Xuehai

> thanks,
> Nivedita




_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


