Singularity
Introduction
Singularity is a container solution created by necessity for scientific and application-driven workloads. Containers can be used to package entire scientific workflows, software and libraries, and even data. You can use pre-built containers, create your own, or even run Docker containers directly.
Note: Singularity has been renamed to Apptainer, but on Euler the command is still singularity.
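For example, a container from Docker Hub can be pulled and run directly (a minimal sketch; the ubuntu and hello-world images are just public placeholders for your own image):
singularity pull docker://ubuntu:22.04
singularity run docker://hello-world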
Installed version
3.8.0
Requirements
Running singularity containers on the HPC infrastructure can be a security risk for the cluster. Access is therefore restricted. To request access, run the following command:
get-access
Workflow
You can either download pre-built containers from external sources (for instance container library or Docker hub), or build your own containers on your workstation (creation of containers requires root privileges and can therefore not be done directly on the HPC clusters).
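As a sketch of that workflow (assuming a definition file my_container.def on your workstation and USER as your cluster username; both names are just examples), you would build the image locally with root privileges and then copy it to your scratch directory on the cluster:
sudo singularity build my_container.sif my_container.def
scp my_container.sif USER@euler.ethz.ch:/cluster/scratch/USER/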
For the development of a workflow within a container, it might be useful to run singularity in a lightweight batch interactive job:
srun --ntasks=1 --pty bash
Within such an interactive job, you get a shell directly on one of the compute nodes, which allows you to run singularity commands interactively. Once the development of the workflow is finished, the containers can also be run in batch mode:
sbatch --ntasks=1 --mem-per-cpu=2048 --time=1:00:00 --wrap="singularity run hello-world.simg"
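If you prefer a job script over the --wrap option, the same batch job could look like this (a sketch; the script name submit.sh is just an example, and hello-world.simg is assumed to be in the submission directory):
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=2048
#SBATCH --time=1:00:00

singularity run hello-world.simg
Submit it with sbatch submit.sh.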
Multi-node computation
If you plan to use a container on multiple nodes with MPI, you can base your workflow on the following example (code hosted here). In your Dockerfile, you need to build Open MPI and then copy it into your own container. Don't forget to compile it with UCX support in order to get good performance.
# Build MPI
FROM ubuntu:22.04 as mpi

ARG OMPI_VERSION=4.1.4
ENV OMPI_URL="https://download.open-mpi.org/release/open-mpi/v4.1/openmpi-$OMPI_VERSION.tar.bz2"
ENV OMPI_DIR=/opt/ompi
# Very important to have ucx
ENV OMPI_OPTIONS="--enable-shared --enable-static --disable-silent-rules --with-slurm --with-io-romio-flags='--with-file-system=nfs+ufs+lustre --with-elan=no' --enable-mpi-cxx --with-pmi=/slurm --with-ucx"

# We need to copy our own slurm libraries
COPY slurm /slurm

# Installing required packages
RUN apt-get update && \
    apt-get install -y wget git bash gcc gfortran g++ make file bzip2 libucx0 libucx-dev && \
    rm -rf /var/lib/apt/lists/*

# Installing Open MPI
RUN mkdir -p /tmp/ompi && \
    mkdir -p /opt && \
    cd /tmp/ompi && \
    wget -O openmpi-${OMPI_VERSION}.tar.bz2 ${OMPI_URL} && \
    tar -xjf openmpi-${OMPI_VERSION}.tar.bz2 && \
    cd /tmp/ompi/openmpi-${OMPI_VERSION} && \
    ./configure --prefix=${OMPI_DIR} ${OMPI_OPTIONS} && \
    make install

# Now we can build whatever image we want
FROM ubuntu:22.04

# Ensure that OpenMPI can be found
ENV OMPI_DIR=/opt/ompi
ENV PATH=$PATH:$OMPI_DIR/bin
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$OMPI_DIR/lib

# Install required dependencies for MPI
RUN apt-get update && \
    apt-get install -y make gcc libucx0 libpmi2-0-dev && \
    rm -rf /var/lib/apt/lists/*

# Copy Open MPI
COPY --from=mpi /opt/ompi $OMPI_DIR

# Copy own files
COPY makefile mpiBench /data/

# Compile
RUN cd /data && \
    make
The Slurm data referenced above can be copied from the cluster with
scp -r USER@euler.ethz.ch:/cluster/apps/slurm ..
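To make the resulting image available on the cluster, one option (a sketch; the registry and image name are taken from the run script below, and pushing to that registry requires the corresponding permissions) is to build the Docker image on your workstation and push it to a registry that the cluster can reach:
docker build -t registry.euler.hpc.ethz.ch/node-testing/mpi-bench:latest .
docker push registry.euler.hpc.ethz.ch/node-testing/mpi-bench:latest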
Finally, you can run the container through Slurm with the following script. Don't forget to bind the scratch directory, or you could run into issues with temporary files.
#!/bin/bash
#SBATCH -n 4 -N 2 --ntasks-per-node=2 -C ib
#SBATCH --time=00:05:00
#SBATCH --exclusive --contiguous

module load openmpi
mpirun -np 4 singularity exec --bind /scratch:/scratch docker://registry.euler.hpc.ethz.ch/node-testing/mpi-bench:latest /data/mpiBench
Security
Singularity uses executables that have the setuid bit set, i.e., they are executed with higher privileges than the calling user. This has security implications: if a vulnerability appears in the wild that exploits setuid binaries for privilege escalation, we will have to disable singularity immediately until a patch resolving the issue is available.
Settings
We recommend you set the following environment variables:
export APPTAINER_CACHEDIR="$SCRATCH/.singularity"
export APPTAINER_TMPDIR="${TMPDIR:-/tmp}"
The first, APPTAINER_CACHEDIR, stores singularity images in your $SCRATCH directory. The second, APPTAINER_TMPDIR, uses the local temporary directory for temporary data when building images.
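If you want these settings to persist across sessions, you can append them to your shell startup file, for example (a sketch assuming you use bash):
echo 'export APPTAINER_CACHEDIR="$SCRATCH/.singularity"' >> ~/.bashrc
echo 'export APPTAINER_TMPDIR="${TMPDIR:-/tmp}"' >> ~/.bashrc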
Support
For our high-performance clusters, we provide the infrastructure to run singularity containers. We provide help with problems regarding how to run your container on either the Euler or the Leonhard cluster, but we cannot provide support for the software installations inside a container. If you encounter a problem with the installation of software inside the container, please contact the developer who created the container.
Troubleshooting
If you receive an error message saying that you cannot run singularity, please first check whether you are a member of the singularity user group (ID-HPC-SINGULARITY). Run the command
id | grep ID-HPC-SINGULARITY
on a login node. If you get an empty output
[samapps@eu-login-16-ng ~]$ id | grep ID-HPC-SINGULARITY
[samapps@eu-login-16-ng ~]$
then your user account is not a member of the singularity user group. If you are a member of a shareholder group and would like to use singularity, please use the command get-access to request access.