Message Passing Interface
Definition
The Message Passing Interface (MPI) is a portable message-passing standard designed to function on parallel computing architectures.
MPI on Euler
On Euler the following versions of MPI are available through modules:
Version | Module command |
---|---|
OpenMPI 4.1.6 | module load stack/2024-06 openmpi/4.1.6 |
OpenMPI 4.1.1 | module load stack/2024-06 openmpi/4.1.1 |
Intel MPI 2021.10.0 | module load stack/2024-06 intel-oneapi-mpi/2021.10.0 |
mpi4py 3.1.4 | module load stack/2024-06 py-mpi4py/3.1.4 |
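As an example, the commands below load one of the MPI modules from the table, compile an MPI program and submit it as a batch job. This is a minimal sketch: hello.c, the executable name and the number of ranks are placeholders, not part of the Euler documentation.

# load an MPI implementation from the table above
module load stack/2024-06 openmpi/4.1.6
# compile an MPI program (hello.c is a placeholder for your own source file)
mpicc -O2 hello.c -o hello
# submit a 4-rank batch job; srun starts the MPI ranks under Slurm
sbatch -n 4 --wrap="srun ./hello"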
InfiniBand vs. Ethernet
Not all nodes on Euler are interconnected with InfiniBand; some only have Ethernet. If your application needs a fast interconnect, you can constrain a job to InfiniBand nodes with the Slurm option
-C ib
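For example, a job script that requests InfiniBand nodes could look as follows (a minimal sketch; the rank count, wall time, module and program name are placeholders):

#!/bin/bash
#SBATCH -n 48              # number of MPI ranks (placeholder)
#SBATCH --time=01:00:00    # requested wall time (placeholder)
#SBATCH -C ib              # constrain the job to InfiniBand nodes

module load stack/2024-06 openmpi/4.1.6
srun ./my_mpi_program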
Intel MPI
Intel MPI requires some configuration options, which differ between Intel MPI versions and node types. When you do not target a particular CPU model, you do not know to which node type your job will be dispatched. We therefore recommend setting these options in a job script instead of specifying the mpirun command directly on the Slurm command line; in the script you can set the configuration options based on the node type the job is running on. Please find below an example script:
#!/bin/bash

# Load your modules here
# module load intel

# Set configuration options based on the Intel version / node type the job is running on

# get the host type from the node name
hname=$(hostname | cut -d "-" -f 2)

# check whether the newer Intel 2022.1.2 installation is loaded (grep -q returns 0 on a match)
echo "$INTEL_PYTHONHOME" | grep -q '2022.1.2'
old_intel=$?

if [ "$old_intel" -eq 1 ]; then
    unset I_MPI_PMI_LIBRARY
    case $hname in
        g1 | a2p | g9)
            # Euler VI, VII or IX node
            # set variables for Intel MPI here
            export FI_PROVIDER=verbs
            ;;
        *)
            # not Euler VI, VII or IX
            export FI_PROVIDER=tcp
            ;;
    esac
fi

# command to run
srun ...
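The wrapper script can then be submitted like any other batch job, for instance (the script name and resource requests are placeholders):

sbatch -n 48 --time=01:00:00 ./run_intel_mpi.sh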
Known Issues
Using the TCP provider may result in lower performance (about 30% lower in our benchmarks). If you want the best performance with Intel MPI, please move to the most recent version of the Intel compiler.
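To check which libfabric provider Intel MPI actually picked at run time, you can raise Intel MPI's debug level before launching your program (a sketch; the executable name is a placeholder):

export I_MPI_DEBUG=5     # Intel MPI then reports the selected provider at startup
srun ./my_mpi_program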