[Xen-devel] New MPI benchmark performance results (update)

To: Xen-devel@xxxxxxxxxxxxxxxxxxx
Subject: [Xen-devel] New MPI benchmark performance results (update)
From: xuehai zhang <hai@xxxxxxxxxxxxxxx>
Date: Tue, 03 May 2005 04:11:12 -0500
Delivery-date: Tue, 03 May 2005 09:11:12 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
User-agent: Mozilla Thunderbird 1.0.2 (Windows/20050317)

Hi all,

In a post I sent in early April (http://lists.xensource.com/archives/html/xen-devel/2005-04/msg00091.html), I reported a performance gap when running the PMB SendRecv benchmark on both native Linux and domU. I have now prepared a webpage comparing the performance of 8 PMB benchmarks under 4 scenarios (native Linux, dom0, domU with SMP, and domU without SMP) at http://people.cs.uchicago.edu/~hai/vm1/vcluster/PMB/.
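
For reference, the SendRecv measurement is essentially a timing loop of the following shape (a simplified sketch rather than the actual PMB source; the 512-byte message size and repetition count here are only illustrative, since PMB itself sweeps over message sizes):

/* Sketch of a PMB-style SendRecv timing loop: each rank exchanges a
 * fixed-size message with its ring neighbours and rank 0 reports the
 * average per-iteration latency. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int msg_bytes = 512;          /* illustrative message size */
    const int reps = 1000;              /* illustrative repetition count */
    int rank, size, i, right, left;
    char *sendbuf, *recvbuf;
    double t0, usec;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    sendbuf = malloc(msg_bytes);
    recvbuf = malloc(msg_bytes);
    right = (rank + 1) % size;
    left  = (rank - 1 + size) % size;

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < reps; i++)
        MPI_Sendrecv(sendbuf, msg_bytes, MPI_CHAR, right, 0,
                     recvbuf, msg_bytes, MPI_CHAR, left,  0,
                     MPI_COMM_WORLD, &status);
    usec = (MPI_Wtime() - t0) * 1e6 / reps;

    if (rank == 0)
        printf("SendRecv, %d bytes: %.2f usec/iteration\n", msg_bytes, usec);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun across the same nodes, something like this should show the same trend independently of the PMB harness.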

In the graphs presented on the webpage, we take the native Linux results as the reference and normalize the other 3 scenarios against them. We observe a general pattern: dom0 usually performs better than domU with SMP, which in turn performs better than domU without SMP (where better performance means lower latency and higher throughput). However, we also notice a very large performance gap between domU (w/o SMP) and native Linux (or dom0, since dom0 generally performs very similarly to native Linux). Some distinct examples are: 8-node SendRecv latency (max domU/Linux score ~ 18), 8-node Allgather latency (max domU/Linux score ~ 17), and 8-node Alltoall latency (max domU/Linux score > 60). The difference in the last example is huge, and we cannot think of a reasonable explanation for why transferring the 512B message size behaves so differently from the other sizes. We would appreciate any insight you can offer into such a large performance problem in these benchmarks.
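
To be clear about how to read the graphs: each plotted score is simply the measured result divided by the native Linux result for the same benchmark and message size, so a score of 1.0 matches native Linux and the Alltoall score above 60 means more than 60 times the native latency. The anomalous 512B Alltoall case should be reproducible outside PMB with a loop like the one below (again a simplified sketch, not the PMB source; the repetition count is illustrative):

/* Sketch of timing MPI_Alltoall with 512 bytes sent to every peer,
 * the message size where we see the >60x gap against native Linux. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int msg_bytes = 512;          /* per-destination message size */
    const int reps = 1000;              /* illustrative repetition count */
    int rank, size, i;
    char *sendbuf, *recvbuf;
    double t0, usec;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    sendbuf = malloc((size_t)msg_bytes * size);
    recvbuf = malloc((size_t)msg_bytes * size);

    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();
    for (i = 0; i < reps; i++)
        MPI_Alltoall(sendbuf, msg_bytes, MPI_CHAR,
                     recvbuf, msg_bytes, MPI_CHAR, MPI_COMM_WORLD);
    usec = (MPI_Wtime() - t0) * 1e6 / reps;

    if (rank == 0)
        printf("Alltoall, %d bytes per pair: %.2f usec/iteration\n",
               msg_bytes, usec);

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}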

BTW, all the benchmarking is based on the original Xen code. That is, we did not modify net_rx_action in netback to kick the frontend after every packet, as suggested by Ian in the following post (http://lists.xensource.com/archives/html/xen-devel/2005-04/msg00180.html).

Please let me know if you have any questions about the configuration of the benchmarking experiments. I am looking forward to your insightful explanations.

Thanks.

Xuehai

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel