Re: [Xen-devel] MPI benchmark performance gap between native linux and domU
xuehai zhang wrote:
Ian,
Thanks for the quick response! Explanations of your comments are inline
below.
I did the following experiments to explore MPI application execution
performance both on native Linux machines and inside unprivileged Xen
user domains. I use 8 machines with identical HW configurations
(498.756 MHz dual CPU, 512MB memory, on a 10MB/sec LAN) and the Pallas
MPI Benchmarks (PMB).
The experiment results show that running the same MPI benchmark in user
domains usually results in worse (sometimes very bad) performance
compared with native Linux machines. The following are the results for
the PMB SendRecv benchmark in both setups (table1 and table2 report
throughput and latency respectively). As you may notice, SendRecv can
achieve 14.9MB/sec throughput on native Linux machines but at most
7.07MB/sec when running inside user domains. The latency results also
show a big gap.
I would appreciate your help if you have had a similar experience and
want to share your insights.
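For context, the gap in the quoted peak figures works out to roughly a 2x throughput penalty (a minimal arithmetic check; the 14.9 and 7.07 MB/sec values are the ones reported above):

```python
# Peak SendRecv throughput (MB/sec) reported in the tables above.
native_mbps = 14.9   # native Linux
domu_mbps = 7.07     # unprivileged Xen user domain

# Ratio of native to domU throughput.
slowdown = native_mbps / domu_mbps
print(f"domU throughput penalty: {slowdown:.2f}x")  # about 2.1x
```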
Xen (or any kind of virtualization) is not particularly well suited to
MPI-type applications, at least unless you're using InfiniBand or some
other smart NIC that avoids having to go through dom0 for the IO
virtualization.
However, the results you are seeing are lower than I'd expect.
Are you running dom0 and the domU on the same CPU or on different CPUs?
How does changing this affect the results?
I did not specify the "cpu" option in Xen's configuration file, so I
thought both dom0 and domU ran on the same CPU (the 1st CPU). I will try
assigning them to different CPUs later.
Actually, I was wrong here: if I do not specify the "cpu" option in the
Xen config file, I observe that Xen usually assigns the 2nd CPU to domU
while running dom0 on the 1st CPU.
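For anyone wanting to make the placement explicit rather than relying on Xen's default, a domU config fragment along these lines should work (a sketch, not taken from the thread; Xen domain config files use Python syntax, and the kernel path, memory size, and name here are illustrative):

```python
# Hypothetical /etc/xen/domU-test config fragment.
kernel = "/boot/vmlinuz-2.6-xenU"
memory = 256
name = "domU-test"

# Pin this domain to the second physical CPU (CPU 1), leaving CPU 0
# to dom0. Omitting this line lets Xen choose a CPU itself.
cpu = 1
```

Pinning dom0 and domU to different CPUs avoids the two domains competing for the same processor during I/O-heavy benchmarks, which is presumably what Ian is suggesting to test.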
Also, are you sure the MTU is the same in all cases?
The output of "ifconfig" shows the MTU is 1500 in all cases.
Further, please can you repeat the experiments with just a dom0 running
on each node.
I will do it and update you later.
Thanks again for the help.
Xuehai
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel