RE: [Xen-devel] New MPI benchmark performance results (update)

To: "xuehai zhang" <hai@xxxxxxxxxxxxxxx>, <Xen-devel@xxxxxxxxxxxxxxxxxxx>
Subject: RE: [Xen-devel] New MPI benchmark performance results (update)
From: "Ian Pratt" <m+Ian.Pratt@xxxxxxxxxxxx>
Date: Tue, 3 May 2005 14:56:38 +0100
Delivery-date: Tue, 03 May 2005 13:56:22 +0000
Envelope-to: www-data@xxxxxxxxxxxxxxxxxxx
List-help: <mailto:xen-devel-request@lists.xensource.com?subject=help>
List-id: Xen developer discussion <xen-devel.lists.xensource.com>
List-post: <mailto:xen-devel@lists.xensource.com>
List-subscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=subscribe>
List-unsubscribe: <http://lists.xensource.com/cgi-bin/mailman/listinfo/xen-devel>, <mailto:xen-devel-request@lists.xensource.com?subject=unsubscribe>
Sender: xen-devel-bounces@xxxxxxxxxxxxxxxxxxx
Thread-index: AcVPwCjADfJg+871Tty4b8cj6I39TwAJykCw
Thread-topic: [Xen-devel] New MPI benchmark performance results (update)
> 
> In the graphs presented on the webpage, we take the results 
> of native Linux as the reference and normalize the other three 
> scenarios to it. We observe a general pattern: dom0 usually 
> performs better than domU with SMP, which in turn performs 
> better than domU without SMP (here "better performance" means 
> lower latency and higher throughput). However, we also notice a 
> very big performance gap between domU (w/o SMP) and native 
> Linux (or dom0, since dom0 generally performs very similarly to 
> native Linux). Some distinct examples are: 8-node SendRecv 
> latency (max domU/Linux score ~ 18), 8-node Allgather latency 
> (max domU/Linux score ~ 17), and 8-node Alltoall latency (max 
> domU/Linux > 60). The performance difference in the last 
> example is huge, and we cannot think of a reasonable 
> explanation for why transferring 512B messages behaves so 
> differently from other message sizes. We would appreciate any 
> insight you can provide into such a big performance problem in 
> these benchmarks.
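
[For reference, a minimal sketch of how such an Alltoall latency score
might be measured and then normalised against a native-Linux baseline.
The message sizes, repeat count and code below are illustrative
assumptions, not the benchmark suite actually used above.]

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(int argc, char **argv)
{
    int rank, nprocs;
    /* Hypothetical message sizes; 512 B is the size showing the anomaly. */
    const int sizes[] = { 128, 256, 512, 1024 };
    const int nsizes = 4;
    const int reps = 100;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (int s = 0; s < nsizes; s++) {
        int bytes = sizes[s];
        char *sendbuf = malloc((size_t)bytes * nprocs);
        char *recvbuf = malloc((size_t)bytes * nprocs);
        memset(sendbuf, 0, (size_t)bytes * nprocs);

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int r = 0; r < reps; r++)
            MPI_Alltoall(sendbuf, bytes, MPI_BYTE,
                         recvbuf, bytes, MPI_BYTE, MPI_COMM_WORLD);
        double lat_us = (MPI_Wtime() - t0) / reps * 1e6;

        /* The normalised score quoted above would be this latency divided
         * by the native-Linux latency for the same message size. */
        if (rank == 0)
            printf("%5d B  Alltoall avg latency %.1f us\n", bytes, lat_us);

        free(sendbuf);
        free(recvbuf);
    }

    MPI_Finalize();
    return 0;
}

[Built with mpicc and run with 8 ranks, the per-size latencies from a
domU run divided by the corresponding native-Linux numbers would give
the normalised scores quoted above.]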

I still don't quite understand your experimental setup. What version of
Xen are you using? How many CPUs does each node have? How many domUs do
you run on a single node?

As regards the anomalous result for 512B Alltoall performance, the best
way to track this down would be to use xen-oprofile. Is it reliably
repeatable? Really bad results are usually due to packets being dropped
somewhere -- there hasn't been a whole lot of effort put into UDP
performance because so few applications use it.
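
[A quick way to check the dropped-packet theory, assuming a Linux guest:
dump the kernel's UDP counters before and after a run and see whether the
drop-related counters rise. A minimal sketch; exact field layout varies
between kernel versions, so this just prints the raw "Udp:" lines.]

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/net/snmp", "r");
    char line[512];

    if (!f) {
        perror("/proc/net/snmp");
        return 1;
    }

    /* /proc/net/snmp carries two "Udp:" lines: first the field names
     * (InDatagrams, NoPorts, InErrors, ...), then their values.
     * InErrors rising between runs suggests datagrams are being dropped. */
    while (fgets(line, sizeof line, f))
        if (strncmp(line, "Udp:", 4) == 0)
            fputs(line, stdout);

    fclose(f);
    return 0;
}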

Ian


 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel