MPI on Euler
The recent network drivers on Euler for the Infiniband high-speed interconnect no longer support the BTL openib transport layer that is used, for instance, in OpenMPI <= 4.0.2. This has consequences for MPI jobs that users run on Euler. Furthermore, the Euler VI and VII nodes have very new Mellanox ConnectX-6 network cards, which are only supported by very recent MPI versions. For jobs to make use of the high-speed interconnect between the compute nodes, the MPI implementation needs to be compatible with the current network driver (Mellanox OFED), and some configuration options may need to be set.
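If you are unsure which transport layers a given OpenMPI installation provides, you can inspect its MCA components. A minimal check, where the module names are only examples and need to be adjusted to the toolchain you actually use:

module load gcc/8.2.0 openmpi/4.1.4   # example toolchain, adjust to the one you use
ompi_info | grep -E "ucx|openib"      # "pml: ucx" indicates UCX support, "btl: openib" the deprecated verbs transport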
Infiniband vs. Ethernet
Please note that not all hardware generations in Euler have the Infiniband high-speed interconnect, which gives you the best performance for MPI jobs. Euler V and VIII nodes are connected via Ethernet. If you run an MPI job, then please add the sbatch option
to ensure that your job is dispatched to nodes that have Infiniband.
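As a sketch of how such a request looks in a job script, Slurm selects nodes by feature with the --constraint option; the feature name ib used below is an assumption for illustration only, so please use the exact option mentioned above:

#SBATCH --constraint=ib   # assumed feature name for Infiniband-capable nodes; check the actual option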
OpenMPI 4.0.2 or newer
Jobs using OpenMPI 4.0.2 or newer should not have any problems using Infiniband, as these installations are compiled with UCX support. No additional configuration is required. Multi-node jobs will also run fine on Euler VI and VII nodes.
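For illustration, a multi-node job with a UCX-enabled OpenMPI needs nothing beyond the usual resource requests; the module versions, resource requests and executable name below are only examples:

#!/bin/bash
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=128
#SBATCH --time=04:00:00

# example toolchain providing OpenMPI >= 4.0.2 with UCX support
module load gcc/8.2.0 openmpi/4.1.4

# no extra MCA options are needed, UCX is selected automatically
mpirun ./my_mpi_program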
OpenMPI older than 4.0.2
For OpenMPI versions older than 4.0.2 there are certain restrictions. On Euler VI and VII nodes, running an unsupported OpenMPI version works, but only for single-node jobs (which do not use the Infiniband interconnect), limiting them to 128 cores. Multi-node jobs only work with OpenMPI >= 4.0.2. We have implemented a rule in the batch system that prevents multi-node jobs using an unsupported OpenMPI version from being dispatched to Euler VI and VII nodes.
Jobs using OpenMPI older than 4.0.2 cannot use the Infiniband network on Euler (unless you compiled a version yourself with UCX support), as the centrally provided installations use the verbs-based BTL openib transport layer, which is no longer supported by the current network driver (Mellanox OFED). You can still run jobs with those installations (for instance to reproduce older results) on Euler III, IV and V nodes, but you need to disable the BTL openib transport layer with the following mpirun option:
-mca btl ^openib
This disables the Infiniband high-speed interconnect and jobs fall back to the slower Ethernet, which still allows you to reproduce older results, just with lower performance.
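For illustration, a complete job script with the openib BTL disabled could look like the sketch below; the module names/versions stand in for an older, centrally provided OpenMPI installation and the resource requests and executable name are examples:

#!/bin/bash
#SBATCH --ntasks=24
#SBATCH --time=01:00:00

# placeholder modules for an OpenMPI installation older than 4.0.2
module load gcc/4.8.5 openmpi/3.0.1

# disable the unsupported openib BTL; the job falls back to Ethernet (TCP)
mpirun -mca btl ^openib ./my_mpi_program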
Intel MPI
Intel MPI requires some configuration options (which differ for each Intel MPI version and node type). Users do not know to which node type their job gets dispatched when not targeting a particular CPU model; we therefore recommend using a wrapper script instead of specifying the mpirun command directly on the Slurm command line. In the script you can set the configuration options based on the node type. Please find an example script below:
#!/bin/bash

# Load your modules here
# module load intel

# set configuration options based on the Intel MPI version / node type the job is running on

# get host type
hname=`hostname | cut -d "-" -f 2`

# detect whether the new intel/2022.1.2 module is loaded (old_intel=1 means an older version)
echo $INTEL_PYTHONHOME | grep -q '2022.1.2'
old_intel=$?

if [ $old_intel == 1 ]; then
    export I_MPI_PMI_LIBRARY=/cluster/apps/gcc-4.8.5/openmpi-4.1.4-pu2smponvdeu574nqolsw4rynnagngch/lib/libpmi.so
    case $hname in
        g1 | a2p)
            # Euler VI or VII node
            # set variables for Intel MPI here
            export FI_PROVIDER=verbs
            ;;
        *)
            # not Euler VI or VII
            export FI_PROVIDER=tcp
            ;;
    esac
fi

# command to run
srun ...
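The wrapper script can then be submitted like any other batch job, for instance (the script name and resource requests below are only placeholders):

sbatch --ntasks=48 --time=01:00:00 ./run_intel_mpi.sh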
Using the TCP provider may result in lower performance (about 30% in our benchmarks). If you want the best performance with Intel MPI, please move to the most recent version of the Intel compiler.