RE: [Xen-devel] MPI benchmark performance gap between native linux and domU
> I ran the following experiments to explore MPI application
> execution performance both on native linux machines and inside
> unprivileged Xen user domains. I used 8 machines with identical HW
> configurations (498.756 MHz dual CPU, 512MB memory, on a 10MB/sec
> LAN) and the Pallas MPI Benchmarks (PMB).
> The experiment results show that running the same MPI benchmark in
> user domains usually gives worse (sometimes very bad) performance
> compared with native linux machines. The following are the results
> of the PMB SendRecv benchmark for both setups (table1 and table2
> report throughput and latency respectively). As you may notice,
> SendRecv achieves a 14.9MB/sec throughput on native linux machines
> but at most 7.07MB/sec when running inside user domains. The
> latency results also show a big gap.
> I would appreciate your help if you have had similar experiences
> and want to share your insights.
Xen (or any kind of virtualization) is not particularly well suited to
MPI-type applications, at least unless you're using InfiniBand or some
other smart NIC that avoids having to go through dom0 for the I/O
virtualization.
However, the results you are seeing are lower than I'd expect.
Are you running dom0 and the domU on the same CPU or on different
CPUs? How does changing this affect the results?
Also, are you sure the MTU is the same in all cases?
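(If you want to check that programmatically rather than eyeballing
ifconfig, a small sketch using the SIOCGIFMTU ioctl works on Linux;
the interface name "eth0" here is an assumption:

/* Sketch: read an interface's MTU via SIOCGIFMTU. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void)
{
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);  /* assumed NIC name */

    if (ioctl(fd, SIOCGIFMTU, &ifr) == 0)
        printf("%s MTU = %d\n", ifr.ifr_name, ifr.ifr_mtu);
    else
        perror("SIOCGIFMTU");

    close(fd);
    return 0;
}

Compare the value in dom0, in the domUs, and on native linux.)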
Further, please can you repeat the experiments with just a dom0
running on each node?
Thanks,
Ian