RE: [Xen-devel] New MPI benchmark performance results (update)
>
> In the graphs presented on the webpage, we take the results
> of native Linux as the reference and normalize the other three
> scenarios to it. We observe a general pattern: dom0 usually
> performs better than domU with SMP, which in turn performs
> better than domU without SMP (here "better performance" means
> lower latency and higher throughput). However, we also notice
> a very big performance gap between domU (without SMP) and
> native Linux (or dom0, since dom0 generally performs very
> similarly to native Linux). Some distinct examples are: 8-node
> SendRecv latency (max domU/Linux score ~ 18), 8-node Allgather
> latency (max domU/Linux score ~ 17), and 8-node Alltoall
> latency (max domU/Linux score > 60). The difference in the
> last example is huge, and we cannot think of a reasonable
> explanation for why transferring 512B messages is so much
> slower than the other message sizes. We would appreciate any
> insight you can provide into such a big performance problem in
> these benchmarks.
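For concreteness, the normalization described above amounts to dividing each scenario's measured latency by the native Linux latency for the same message size; a minimal sketch of that calculation follows, with placeholder figures that are purely illustrative and not the actual benchmark results:

# Minimal sketch of the normalization described above: each scenario's
# latency is divided by the native Linux latency at the same message size,
# so a score of 1.0 means "as fast as native Linux" and 18 means 18x slower.
# All numbers below are illustrative placeholders, not real benchmark data.

native_linux = {64: 110.0, 512: 130.0, 4096: 310.0}      # latency (usec) per message size
scenarios = {
    "dom0":     {64: 115.0, 512: 140.0, 4096: 330.0},
    "domU_smp": {64: 180.0, 512: 900.0, 4096: 600.0},
    "domU_up":  {64: 250.0, 512: 8000.0, 4096: 900.0},
}

for name, latencies in scenarios.items():
    for size in sorted(latencies):
        score = latencies[size] / native_linux[size]      # >1.0 means slower than native
        print(f"{name:9s} {size:5d}B  normalized latency = {score:.2f}")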
I still don't quite understand your experimental setup. What version of
Xen are you using? How many CPUs does each node have? How many domUs do
you run on a single node?

As regards the anomalous result for 512B Alltoall performance, the best
way to track this down would be to use xen-oprofile. Is it reliably
repeatable? Really bad results are usually due to packets being dropped
somewhere -- there hasn't been a whole lot of effort put into UDP
performance, because so few applications use it.
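One quick way to check whether datagrams are being dropped is to compare the kernel's UDP counters on each node before and after a run; below is a minimal sketch that reads them from /proc/net/snmp (a standard Linux interface, though the exact fields reported depend on the kernel version):

# Minimal sketch: read Linux's UDP counters from /proc/net/snmp and print the
# ones most relevant to drops. Rising InErrors between benchmark runs would
# point at datagrams being dropped on the receive path.
# /proc/net/snmp is a standard Linux interface; the exact fields present
# can vary with the kernel version.

def udp_counters(path="/proc/net/snmp"):
    with open(path) as f:
        udp_lines = [line.split() for line in f if line.startswith("Udp:")]
    # The first "Udp:" line holds the field names, the second the values.
    names, values = udp_lines[0][1:], udp_lines[1][1:]
    return dict(zip(names, map(int, values)))

if __name__ == "__main__":
    counters = udp_counters()
    for field in ("InDatagrams", "NoPorts", "InErrors"):
        print(field, counters.get(field, "not reported by this kernel"))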
Ian
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel