Orca/Parallel
If you would like to run ORCA in parallel, you need to load the matching OpenMPI module as well. For instance, when using ORCA 3.0.3, you would load the following modules:
module load orca/3.0.3 open_mpi/1.6.5
Mapping of Orca and OpenMPI versions:
Old software stack:
Orca version | OpenMPI version | Module load command |
---|---|---|
3.0.1 | 1.6.5 | module load orca/3.0.1 open_mpi/1.6.5 |
3.0.3 | 1.6.5 | module load orca/3.0.3 open_mpi/1.6.5 |
4.0.0 | 2.0.2 | module load new orca/4.0.0 open_mpi/2.0.2 |
4.0.0.2 | 2.0.2 | module load new orca/4.0.0.2 open_mpi/2.0.2 |
4.0.1 | 2.0.2 | module load new orca/4.0.1 open_mpi/2.0.2 |
4.1.0 | 2.1.1 | module load new orca/4.1.0 open_mpi/2.1.1 |
4.1.1 | 2.1.1 | module load new orca/4.1.1 open_mpi/2.1.1 |
4.2.0 | 2.1.1 | module load new orca/4.2.0 open_mpi/2.1.1 |
New software stack:
Orca version | OpenMPI version | Module load command |
---|---|---|
4.2.1 | 4.0.2 | module load orca/4.2.1 openmpi/4.0.2 |
5.0.0 | 4.0.2 | module load orca/5.0.0 openmpi/4.0.2 |
5.0.1 | 4.0.2 | module load orca/5.0.1 openmpi/4.0.2 |
5.0.2 | 4.0.2 | module load orca/5.0.2 openmpi/4.0.2 |
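To check which of these versions are actually installed on your cluster, you can query the module system directly. A short sketch (the available versions and the output format are cluster-specific):

```shell
# List the installed ORCA and OpenMPI modules (output is cluster-specific).
module avail orca
module avail openmpi

# After loading a pair from the table, verify what is active:
module list
```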
Furthermore, you need to add a line to the ORCA input file to specify the number of cores to use. For a 4-core job, you would add

%pal nprocs 4 end

to your ORCA input file (the %pal block must be closed with end).
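As a quick sanity check before submitting, you can extract the core count from the %pal line and compare it with what you intend to request from the batch system. A minimal sketch (the file name test1.inp and the demo input contents are assumptions for illustration):

```shell
# Write a minimal demo input (test1.inp is an assumed name) ...
cat > test1.inp <<'EOF'
! BP def2-tzvp tightscf opt
%pal nprocs 4 end
* xyz -2 2
Au 0 0 0
*
EOF

# ... and extract the core count after "nprocs" (case-insensitive).
nprocs=$(grep -ioE 'nprocs[[:space:]]+[0-9]+' test1.inp | grep -oE '[0-9]+')
echo "input requests $nprocs cores"
```

The number printed here should match the --ntasks value you pass to the batch system below.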
The example input file provided on this wiki page would then change to
# Updated input file for the goldchloride example
! BP ECP{def2-TZVP,def2-TZVP/J} def2-tzvp def2-tzvp/j tightscf opt
%pal nprocs 4 end
* xyz -2 2
Au 0 0 0
Cl 2.5 0 0
Cl -2.5 0 0
Cl 0 2.5 0
Cl 0 -2.5 0
*
and the submission command would be
sbatch --ntasks=4 --time=1:00:00 --mem-per-cpu=2g --wrap="orca test1.inp > test1.out"
Please note that you also need to request 4 cores from the batch system when submitting the job. It is not sufficient to just change the ORCA input file.
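Instead of passing everything via --wrap, you can also put the submission options into a job script. A minimal sketch (the file name run_orca.sh and the chosen module versions are assumptions; ORCA is commonly invoked via its full path for parallel runs so that it can launch its own MPI worker processes):

```shell
#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --time=1:00:00
#SBATCH --mem-per-cpu=2g

# Load a matching ORCA/OpenMPI pair from the table above.
module load orca/5.0.2 openmpi/4.0.2

# For parallel runs, start ORCA with its full path so it can
# spawn the MPI processes itself; do NOT prefix it with mpirun.
$(which orca) test1.inp > test1.out
```

You would then submit the job with sbatch run_orca.sh. Make sure --ntasks matches the nprocs value in the %pal block of the input file.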