New SPACK software stack on Euler

Revision as of 09:59, 20 November 2018

Introduction

We are currently setting up a new software stack on Euler with the SPACK package manager, using the LMOD module system. The new software stack will have three toolchains (GCC 4.8.5, GCC 6.3.0 and Intel 18.0.1). A toolchain is a set of applications and libraries that have all been compiled with the same compiler.

Hierarchical modules

LMOD allows defining a hierarchy of modules with three layers (Core, Compiler, MPI). The core layer contains all module files that do not depend on any compiler or MPI library. The compiler layer contains all modules that depend on a particular compiler, but not on any MPI library. The MPI layer contains modules that depend on a particular compiler/MPI combination.

When you log in to the cluster, the default compiler gcc/4.8.5 is loaded automatically. Running the module avail command displays all modules that are available for gcc/4.8.5. If you would like to see the modules available for a different compiler, for instance gcc/6.3.0, you need to load that compiler module and run module avail again. To check the modules available for the gcc/4.8.5 openmpi/3.0.1 combination, load the corresponding compiler and MPI modules and run module avail once more.
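The workflow described above can be sketched as follows (the compiler and MPI versions are the ones named in this article; the output of module avail depends on what is installed, so it is omitted):

```shell
# List the modules available for the default compiler (gcc/4.8.5)
module avail

# Load a different compiler to see the modules built for it
module load gcc/6.3.0
module avail

# Load a compiler/MPI combination to see the MPI-layer modules
module load gcc/4.8.5 openmpi/3.0.1
module avail
```

Note that loading gcc/4.8.5 after gcc/6.3.0 does not require unloading anything by hand; as described below, LMOD swaps the compiler automatically.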

As a consequence of the module hierarchy, you can never have two different versions of the same module loaded at the same time. This helps to avoid problems arising due to misconfiguration of the environment.

Available software

You can find an overview of all packages that are available in the new software stack on our wiki.

Switching between software stacks

It is possible to switch back and forth between the current and the new software stack, but they cannot be mixed. To switch, you need to source a script. Changing from the current software stack to the new one, or vice versa, unloads all modules before the switch takes place.

Switching from the current to the new software stack:

source /cluster/apps/local/env2lmod.sh

Switching from the new to the current software stack:

source /cluster/apps/local/lmod2env.sh
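If you switch frequently, you could wrap the two commands in shell functions in your ~/.bashrc. The function names below are hypothetical and not part of the cluster setup; only the two script paths are taken from this article:

```shell
# Hypothetical convenience functions; the sourced paths are the
# switch scripts documented above.
use_lmod() { source /cluster/apps/local/env2lmod.sh; }  # current -> new stack
use_env()  { source /cluster/apps/local/lmod2env.sh; }  # new -> current stack
```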

Example

Loading some modules in the current software stack:

[sfux@eu-login-16-ng ~]$ module load new gcc/4.8.2 open_mpi/1.6.5 openblas/0.2.8_seq
[sfux@eu-login-16-ng ~]$ module list
Currently Loaded Modulefiles:
  1) modules                       3) gcc/4.8.2(default:4.8)        5) openblas/0.2.8_seq(0.2.8)
  2) new                           4) open_mpi/1.6.5(1.6:default)

Switching to the new software stack and loading some modules:

[sfux@eu-login-16-ng ~]$ source /cluster/apps/local/env2lmod.sh
[sfux@eu-login-16-ng ~]$ module list

Currently Loaded Modules:
  1) StdEnv   2) gcc/4.8.5

  

[sfux@eu-login-16-ng ~]$ module load intel/18.0.1 openmpi/3.0.1 openblas/0.2.20

Lmod is automatically replacing "gcc/4.8.5" with "intel/18.0.1".

[sfux@eu-login-16-ng ~]$ echo $MKLROOT
/cluster/apps/intel/parallel_studio_xe_2018_r1/compilers_and_libraries_2018.1.163/linux/mkl
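With the Intel toolchain loaded, $MKLROOT can be used directly in build commands. A hypothetical compile line is sketched below; the source file name is a placeholder, while the link flags are the standard sequential MKL link line:

```shell
# Hypothetical: compile a placeholder source file against MKL using the
# Intel compiler from the loaded intel/18.0.1 module.
icc -O2 solver.c -o solver \
    -I"$MKLROOT/include" -L"$MKLROOT/lib/intel64" \
    -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread -lm
```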

Switching back to the current software stack and loading again some modules:

[sfux@eu-login-16-ng ~]$ source /cluster/apps/local/lmod2env.sh
[sfux@eu-login-16-ng ~]$ module list
Currently Loaded Modulefiles:
  1) modules
[sfux@eu-login-16-ng ~]$ module load new gcc/4.8.2 open_mpi/1.6.5 openblas/0.2.8_seq
[sfux@eu-login-16-ng ~]$ module list
Currently Loaded Modulefiles:
  1) modules                       3) gcc/4.8.2(default:4.8)        5) openblas/0.2.8_seq(0.2.8)
  2) new                           4) open_mpi/1.6.5(1.6:default)
[sfux@eu-login-16-ng ~]$