Orca/Parallel
From ScientificComputing
Revision as of 11:44, 5 January 2022
If you would like to run orca in parallel, you need to load the corresponding open_mpi module. For instance, when using orca 3.0.3, you would need to load the following modules:
module load orca/3.0.3 open_mpi/1.6.5
Mapping of Orca and OpenMPI versions:
Old software stack:
Orca version | OpenMPI version | Module load command
---|---|---
3.0.1 | 1.6.5 | module load orca/3.0.1 open_mpi/1.6.5
3.0.3 | 1.6.5 | module load orca/3.0.3 open_mpi/1.6.5
4.0.0 | 2.0.2 | module load new orca/4.0.0 open_mpi/2.0.2
4.0.0.2 | 2.0.2 | module load new orca/4.0.0.2 open_mpi/2.0.2
4.0.1 | 2.0.2 | module load new orca/4.0.1 open_mpi/2.0.2
4.1.0 | 2.1.1 | module load new orca/4.1.0 open_mpi/2.1.1
4.1.1 | 2.1.1 | module load new orca/4.1.1 open_mpi/2.1.1
4.2.0 | 2.1.1 | module load new orca/4.2.0 open_mpi/2.1.1
New software stack:
Orca version | OpenMPI version | Module load command
---|---|---
4.2.1 | 4.0.2 | module load orca/4.2.1 openmpi/4.0.2
5.0.0 | 4.0.2 | module load orca/5.0.0 openmpi/4.0.2
5.0.1 | 4.0.2 | module load orca/5.0.1 openmpi/4.0.2
5.0.2 | 4.0.2 | module load orca/5.0.2 openmpi/4.0.2
Furthermore, you also need to add a line to the orca input file to specify the number of cores that should be used. For a 4-core job, you would add
%pal nprocs 4
end
to your orca input file.
The example input file provided on this wiki page would then change to
# Updated input file for the goldchloride example:
! BP ECP{def2-TZVP,def2-TZVP/J} def2-tzvp def2-tzvp/j tightscf opt
%pal nprocs 4
end
* xyz -2 2
Au   0    0    0
Cl   2.5  0    0
Cl  -2.5  0    0
Cl   0    2.5  0
Cl   0   -2.5  0
*
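For reproducibility, the input file above can also be written directly from the shell with a here-document (a sketch; the file name test1.inp matches the submission command on this page):

```shell
# Write the 4-core goldchloride example input to test1.inp.
# File name and contents are taken from the examples on this page.
cat > test1.inp <<'EOF'
# Updated input file for the goldchloride example:
! BP ECP{def2-TZVP,def2-TZVP/J} def2-tzvp def2-tzvp/j tightscf opt
%pal nprocs 4
end
* xyz -2 2
Au   0    0    0
Cl   2.5  0    0
Cl  -2.5  0    0
Cl   0    2.5  0
Cl   0   -2.5  0
*
EOF
# Confirm the core count requested in the input file.
grep nprocs test1.inp
# prints: %pal nprocs 4
```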
and the submission command would be
bsub -n 4 -W 1:00 -R "rusage[mem=2048]" "orca test1.inp > test1.out"
Please note that you also need to request 4 cores from the batch system when submitting the job. It is not sufficient to just change the orca input file.
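Since the core count appears in two places (the %pal line in the input file and the -n flag of bsub), the two can drift apart. One way to keep them in sync is to derive the batch request from the input file; the awk pattern and variable name below are illustrative assumptions, not part of the cluster's tooling:

```shell
# Create a minimal input fragment for demonstration; on the cluster you
# would use your real input file instead.
printf '%%pal nprocs 4\nend\n' > test1.inp
# Read the core count from the %pal line of the input file...
NPROCS=$(awk '$1 == "%pal" && $2 == "nprocs" {print $3}' test1.inp)
# ...and build the matching submission command. Echoed here instead of
# executed, since bsub is only available on the cluster.
echo bsub -n "$NPROCS" -W 1:00 -R '"rusage[mem=2048]"' '"orca test1.inp > test1.out"'
```

This way, changing nprocs in the input file automatically changes the number of cores requested from the batch system.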