Modules and applications


On the cluster, we provide a wide range of centrally installed software, often in multiple versions. To configure your environment for a particular software version, we use modules. A module adjusts your current computing environment (PATH, LD_LIBRARY_PATH, MANPATH, etc.) so that all required binaries and libraries are found.
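For example, loading a module typically prepends the software's installation directories to your PATH; a minimal illustration, using the python/3.8.5 module from the new software stack described below:

$ env2lmod                  # switch to the new software stack (see below)
$ module load python/3.8.5
$ which python              # now resolves to the module's installation
$ echo $PATH                # the module's bin directory has been prepended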

We employ two types of Modules packages on the cluster:

  • LMOD Modules
  • Environment Modules

Two software stacks on Euler

Old software stack with Environment Modules

Upon login on Euler, the old software stack is set by default. To switch from the new back to the old software stack, use:

$ lmod2env

New software stack with LMOD Modules

All new software is installed exclusively in the new software stack, mostly built with the SPACK package manager. To switch from the old to the new software stack, use:

$ env2lmod

In a Bash job script, add the following line before loading any modules (see the sketch below):

source /cluster/apps/local/env2lmod.sh
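A minimal job script sketch, assuming the LSF batch system used on Euler; the #BSUB directives and my_script.py are illustrative placeholders (see the Job management page for the actual submission syntax):

#!/bin/bash
#BSUB -n 1                               # illustrative resource request
#BSUB -W 1:00                            # illustrative run time limit

source /cluster/apps/local/env2lmod.sh   # switch this job to the new software stack
module load gcc/6.3.0 python/3.8.5
python my_script.py                      # placeholder for your application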

You can set the default software stack to the new one with the command:

$ set_software_stack.sh new

Please note that after setting the default software stack, you need to log out and log in again for the change to become active.
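To check which stack is active after logging in again, one option (an assumption, not an official check) is to ask the module system to identify itself, since LMOD is Lua-based while Environment Modules is not:

$ module --version          # LMOD reports "Modules based on Lua"; Environment Modules reports its own version string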

Modules commands

Here are the most frequently used module commands.

List all modules that match a given name, e.g., all available Python versions

$ module avail python

Load modules, e.g., load GCC 6.3.0 and Python 3.8.5 in the new software stack. Please see Two software stacks on Euler for more details.

$ env2lmod
$ module load gcc/6.3.0 python/3.8.5

List all currently loaded modules

$ module list

Currently Loaded Modules:
 1) StdEnv   2) gcc/6.3.0   3) openblas/0.2.20   4) python/3.8.5
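Once loaded, the module's binaries take precedence in your PATH; a quick sanity check for the example above:

$ which python              # resolves to the python/3.8.5 module's installation
$ python --version          # should report Python 3.8.5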


Other commonly used module commands

module                    # get info about module sub-commands
module avail              # List all modules available on the cluster
module key keyword        # list all modules whose description contains keyword
module help name          # get information about module name
module show name          # show what module name does (without loading it)
module unload name        # unload module name
module purge              # unload all modules at once

For the new software stack only

module spider             # report all modules
module spider name        # report all versions of the module name
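For instance, to inspect a module without loading it and then reset your environment (using gcc/6.3.0 from the examples above):

$ module show gcc/6.3.0     # display the environment changes the module would make
$ module load gcc/6.3.0
$ module purge              # unload all modules, back to a clean environment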

Where are applications installed?

Centrally installed

Central applications are installed in /cluster/apps

Applications that are needed by many users, such as compilers and libraries, should be installed centrally

  • Visible and accessible to all users via modules
  • Installed and maintained by cluster support
  • Commercial licenses provided by the IT shop of ETH or by research groups

In $HOME

Users can install additional applications in their home directory, as long as the quotas (space: 16 GB; files/directories: 100'000) are not exceeded

  • Avoid Anaconda installations, as they often exceed the files/directories quota. Create a Python virtual environment instead (see the sketch below).
  • For Python and R, packages can easily be installed locally, e.g., for Python:
$ pip install --user packagename
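A minimal sketch of the virtual-environment alternative mentioned above; the environment name ~/myenv is an arbitrary placeholder:

$ module load gcc/6.3.0 python/3.8.5
$ python -m venv ~/myenv                 # create the environment in your home directory
$ source ~/myenv/bin/activate            # activate it; your prompt changes to (myenv)
(myenv) $ pip install packagename        # packages now install into ~/myenv, not ~/.local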

The structure of LMOD Modules

LMOD Modules use a hierarchy with three layers to avoid conflicts when multiple modules are loaded at the same time.

  • The core layer contains software that is independent of compilers and MPI libraries, e.g., commercial software that ships with its own runtime libraries:
$ module load comsol/5.6
  • The compiler layer contains software that depends on a compiler:
$ module load gcc/6.3.0 hdf5/1.10.1
  • The MPI layer contains software that depends on both a compiler and an MPI library:
$ module load gcc/6.3.0 openmpi/4.0.2 openblas
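The practical consequence of the hierarchy is that compiler- and MPI-dependent modules only become visible once their parent modules are loaded; a sketch of this behavior:

$ module avail hdf5         # before loading a compiler: no gcc-built hdf5 is listed
$ module load gcc/6.3.0
$ module avail hdf5         # now hdf5/1.10.1 built with gcc/6.3.0 appears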

[Figure: LMOD toolchains]

There are four main toolchains (see the figure above); their compilers can be combined with OpenMPI 3.0.1 or 4.0.2 and with OpenBLAS.

Modules for GPUs

In the new software stack, we have installed a Python module that is linked against the CUDA, cuDNN and NCCL libraries and contains machine learning / deep learning packages such as scikit-learn, TensorFlow and PyTorch. This module can be loaded with the following commands:

$ env2lmod
$ module load gcc/6.3.0 python_gpu/3.8.5

The following have been reloaded with a version change:
  1) gcc/4.8.5 => gcc/6.3.0

$ module list

Currently Loaded Modules:
  1) StdEnv            4) cuda/11.0.3    7) python_gpu/3.8.5
  2) gcc/6.3.0         5) cudnn/8.0.5
  3) openblas/0.2.20   6) nccl/2.7.8-1
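To verify that the frameworks actually see a GPU, run the check on a node with a GPU (i.e., inside a GPU job); a minimal sketch, assuming the module's PyTorch build:

$ module load gcc/6.3.0 python_gpu/3.8.5
$ python -c "import torch; print(torch.cuda.is_available())"   # prints True on a GPU node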

You can also list the CUDA, cuDNN and NCCL libraries available on the cluster and load the versions matching your needs.

 $ env2lmod
 $ module load gcc/6.3.0
 $ module avail cuda

------------- /cluster/apps/lmodules/Compiler/gcc/6.3.0 -------------
  cuda/8.0.61     cuda/9.2.88      cuda/11.0.3 (L)
  cuda/9.0.176    cuda/10.0.130    cuda/11.1.1
  cuda/9.1.85     cuda/10.1.243    cuda/11.2.2 (D)
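In this listing, (L) marks the version that is currently loaded and (D) the default version that a plain "module load cuda" would select. To pick a specific version instead:

$ module load cuda/11.1.1 cudnn/8.0.5   # versions are illustrative; choose from the listed ones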

Application lists

Table: Examples of centrally-installed commercial and open source applications

  • Bioinformatics & life science: Bamtools, BLAST, Bowtie, CLC Genomics Server, RAxML, Relion, TopHat
  • Finite element methods: Ansys, Abaqus, FEniCS
  • Machine learning: PyTorch, scikit-learn, TensorFlow, Theano
  • Multi-physics phenomena: AnsysEDT, COMSOL Multiphysics, STAR-CCM+
  • Quantum chemistry & molecular dynamics: ADF, Ambertools, Gaussian, Molcas, Octopus, Orca, Qbox, Turbomole
  • Symbolic, numerical and statistical mathematics: Gurobi, Mathematica, MATLAB, R, Stata
  • Visualization: FFmpeg, ParaView, VisIt, VTK

Table: Examples of centrally-installed software development tools

  • Compilers: GCC, Intel
  • Programming languages: C, C++, Fortran, Go, Java, Julia, Perl, Python, Ruby, Scala
  • Scientific libraries: Boost, Eigen, FFTW, GMP, GSL, HDF5, MKL, NetCDF, NumPy, OpenBLAS, SciPy
  • Solvers: PETSc, Gurobi, Hypre, Trilinos
  • MPI libraries: Open MPI, Intel MPI, MPICH
  • Build systems: GNU Autotools, CMake, qmake, make
  • Version control: CVS, Git, Mercurial, SVN
