OpenMPI 1.4.3 vs. Intel MPI Efficiency question

I noticed that the exact same code took 50% more time to run under OpenMPI 1.4.3 than under Intel MPI. I use the following syntax to compile and run:

Intel MPI Compiler: Red Hat Fedora Core release 3 (Heidelberg), kernel: Linux 2.6.9-1.667smp x86_64

 mpiicpc -o <filename> xxxx.cpp -lmpi

OpenMPI 1.4.3: CentOS 5.5 with Python 2.4.3, kernel: Linux 2.6.18-194.el5 x86_64

 mpiCC xxxx.cpp -o <filename>

MPI run command:

 mpirun -np 4 <filename> 
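One detail worth keeping identical on both systems is how the runtime itself is measured. As a sketch, the simplest option is to wrap the very same run command with the shell's time built-in on each machine (or call MPI_Wtime inside the code around the region of interest):

 time mpirun -np 4 <filename>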

Other hardware specs (from /proc/cpuinfo):

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 15
model           : 3
model name      : Intel(R) Xeon(TM) CPU 3.60GHz
stepping        : 4
cpu MHz         : 3591.062
cache size      : 1024 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 1
apicid          : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 5
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall lm constant_tsc pni monitor ds_cpl est tm2 cid xtpr
bogomips        : 7182.12
clflush size    : 64
cache_alignment : 128
address sizes   : 36 bits physical, 48 bits virtual
power management:

Can the issue of efficiency be deciphered from the above info? Do the compiler flags have an effect on the efficiency of the simulation? If so, what flags may be useful to include for OpenMPI? Will including MPICH2 increase the efficiency of running simulations with OpenMPI?


Is OpenMPI configured to use the same underlying compiler as the Intel MPI compiler? Your OpenMPI build may be using gcc/g++, which would explain the difference. If OpenMPI is using the same compiler as the Intel MPI compiler, make sure the compiler optimization flags used by both are identical.
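For reference, here is a sketch of how to check what each wrapper compiler actually invokes. The commands below are standard Open MPI and Intel MPI wrapper options; the -O3 flag is only a placeholder for whatever optimization level your Intel MPI build really uses:

 # Open MPI: print the underlying compiler and the flags the mpiCC wrapper adds
 mpiCC --showme
 ompi_info | grep -i compiler

 # Intel MPI: print the underlying command line built by mpiicpc
 mpiicpc -show

 # If the Open MPI wrapper reports g++, it can be pointed at the Intel compiler
 # instead (assuming icpc is installed on the CentOS machine):
 OMPI_CXX=icpc mpiCC -O3 xxxx.cpp -o <filename>

If both wrappers end up invoking the same compiler at the same optimization level and the gap remains, the difference is more likely to come from the MPI libraries themselves rather than from the compiled user code.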
