Orca

Please note that this application page refers to the old CentOS software stack, which is obsolete and no longer works with the new Ubuntu setup. You can find an overview of the Ubuntu software stack on this wiki page.

Category

Chemistry, Quantum chemistry, DFT

Description

ORCA is an electronic structure program package: a flexible, efficient, and easy-to-use general-purpose tool for quantum chemistry, with specific emphasis on spectroscopic properties of open-shell molecules. It features a wide variety of standard quantum chemical methods, ranging from semiempirical methods through DFT to single- and multireference correlated ab initio methods. It can also treat environmental and relativistic effects.

Available versions (Euler, old software stack)

Legacy versions    Supported versions    New versions
3.0.1, 3.0.3       (none)                4.0.0, 4.0.0.2, 4.0.1, 4.1.0, 4.1.1, 4.2.0

Please note that this page refers to installations from the old software stack. There are two software stacks on Euler. Newer versions of software are found in the new software stack.

Environment modules (Euler, old software stack)

Version    Module load command             Additional modules loaded automatically
3.0.1      module load orca/3.0.1          (none)
3.0.3      module load orca/3.0.3          (none)
4.0.0      module load new orca/4.0.0      (none)
4.0.0.2    module load new orca/4.0.0.2    (none)
4.0.1      module load new orca/4.0.1      (none)
4.1.0      module load new orca/4.1.0      (none)
4.1.1      module load new orca/4.1.1      (none)
4.2.0      module load new orca/4.2.0      (none)
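
To check which ORCA modules are available, you can query the module system directly (a standard Environment Modules command; the output depends on the active software stack and on whether the new module category has been loaded):

module avail orca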


How to submit a job

You can submit an ORCA job (with input file test1.inp) in batch mode with the following command:
sbatch [Slurm options] --wrap="orca test1.inp > test1.out"
Here you need to replace [Slurm options] with the Slurm parameters describing the resource requirements of the job. Documentation of the sbatch parameters can be found on the wiki page about the batch system.
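
For example, a serial ORCA job with a one-hour run time limit and 2 GB of memory could be submitted as follows (the resource values are placeholders; adjust them to your job):

sbatch --ntasks=1 --time=1:00:00 --mem-per-cpu=2g --wrap="orca test1.inp > test1.out"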

Parallel jobs

If you would like to run ORCA in parallel, you also need to load the corresponding open_mpi module. For instance, when using ORCA 3.0.3, you would load the following modules:
module load orca/3.0.3 open_mpi/1.6.5
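
As an optional sanity check, you can verify that both modules are loaded and that the matching binaries are found in your PATH:

module list
which orca mpirun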

Mapping of Orca and OpenMPI versions:

Old software stack:

Orca version    OpenMPI version    Module load command
3.0.1           1.6.5              module load orca/3.0.1 open_mpi/1.6.5
3.0.3           1.6.5              module load orca/3.0.3 open_mpi/1.6.5
4.0.0           2.0.2              module load new orca/4.0.0 open_mpi/2.0.2
4.0.0.2         2.0.2              module load new orca/4.0.0.2 open_mpi/2.0.2
4.0.1           2.0.2              module load new orca/4.0.1 open_mpi/2.0.2
4.1.0           2.1.1              module load new orca/4.1.0 open_mpi/2.1.1
4.1.1           2.1.1              module load new orca/4.1.1 open_mpi/2.1.1
4.2.0           2.1.1              module load new orca/4.2.0 open_mpi/2.1.1

New software stack:

Orca version    OpenMPI version    Module load command
4.2.1           4.0.2              module load orca/4.2.1 openmpi/4.0.2
5.0.0           4.0.2              module load orca/5.0.0 openmpi/4.0.2
5.0.1           4.0.2              module load orca/5.0.1 openmpi/4.0.2
5.0.2           4.0.2              module load orca/5.0.2 openmpi/4.0.2

Furthermore, you need to add a line to the ORCA input file to specify the number of cores to be used. For a 4-core job, you would add

%pal nprocs 4
end

to your ORCA input file.
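
The same setting can also be written on a single line, and ORCA additionally accepts a simple-input keyword of the form PALn (to our knowledge only for small core counts, e.g. PAL2 to PAL8). Either of the following is equivalent to the block above:

%pal nprocs 4 end
! PAL4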

The example input file provided on this wiki page would then change to

# Updated input file for the goldchloride example:
! BP ECP{def2-TZVP,def2-TZVP/J} def2-tzvp def2-tzvp/j tightscf opt
%pal nprocs 4
     end
* xyz -2 2
Au  0   0   0
Cl  2.5 0   0
Cl -2.5 0   0
Cl  0   2.5 0
Cl  0  -2.5 0
*

and the submission command would be

sbatch --ntasks=4 --time=1:00:00 --mem-per-cpu=2g --wrap="orca test1.inp > test1.out"
Please note that you also need to request 4 cores from the batch system when submitting the job; it is not sufficient to change only the ORCA input file.
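
Alternatively, the Slurm options can be placed in a job script instead of using --wrap (a minimal sketch; the script name run_orca.sh and the resource values are placeholders). Note that the ORCA documentation asks for parallel runs to be started via the full path of the orca binary, which $(which orca) resolves, and that ORCA launches its own MPI processes, so the call must not be prefixed with mpirun:

#!/bin/bash
#SBATCH --ntasks=4
#SBATCH --time=1:00:00
#SBATCH --mem-per-cpu=2g

module load orca/3.0.3 open_mpi/1.6.5

# ORCA starts the MPI processes itself; give the full path and no mpirun prefix
$(which orca) test1.inp > test1.out

The script would then be submitted with sbatch run_orca.sh.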

Example

As an example of an ORCA job, we look at the SCF optimization of the AuCl4^2- complex (employing an ECP for gold) in the S=1/2 spin state. Note that the transcript below was recorded with the former LSF batch system (bsub/bjobs); under the current Slurm setup you would submit with sbatch as shown above.
[leonhard@euler01 ~]$ cat test1.inp 
! BP ECP{def2-TZVP,def2-TZVP/J} def2-tzvp def2-tzvp/j tightscf opt
* xyz -2 2
Au  0    0   0
Cl  2.5  0   0
Cl -2.5  0   0
Cl  0    2.5 0
Cl  0   -2.5 0
*
[leonhard@euler01 ~]$ module load orca/3.0.3
[leonhard@euler01 ~]$ bsub -n 1 -W 1:00 -R "rusage[mem=2048]" "orca test1.inp > test1.out"
Generic job.
Job <30746887> is submitted to queue <normal.4h>.
[leonhard@euler01 ~]$ bjobs
JOBID      USER      STAT  QUEUE      FROM_HOST   EXEC_HOST   JOB_NAME   SUBMIT_TIME
30746887   leonhard  PEND  normal.4h  develop01               *test1.out Oct 27 13:56
[leonhard@euler01 ~]$ bjobs
JOBID      USER      STAT  QUEUE      FROM_HOST   EXEC_HOST   JOB_NAME   SUBMIT_TIME
30746887   leonhard  PEND  normal.4h  develop01               *test1.out Oct 27 13:56
[leonhard@euler01 ~]$ bjobs
JOBID      USER      STAT  QUEUE      FROM_HOST   EXEC_HOST   JOB_NAME   SUBMIT_TIME
30746887   leonhard  RUN   normal.4h  develop01   e1126       *test1.out Oct 27 13:56
[leonhard@euler01 ~]$ bjobs
No unfinished job found
[leonhard@euler01 ~]$ grep -B1 -A19 "TOTAL SCF ENERGY" test1.out | tail -21
----------------
TOTAL SCF ENERGY
----------------

Total Energy       :        -1977.27316469 Eh          -53804.33817 eV

Components:
Nuclear Repulsion  :          505.82465996 Eh           13764.18876 eV
Electronic Energy  :        -2483.09782466 Eh          -67568.52693 eV

One Electron Energy:        -3822.25690193 Eh         -104008.89801 eV
Two Electron Energy:         1339.15907728 Eh           36440.37109 eV

Virial components:
Potential Energy   :        -3861.38845416 Eh         -105073.72168 eV
Kinetic Energy     :         1884.11528946 Eh           51269.38351 eV
Virial Ratio       :            2.04944383


DFT components:
N(Alpha)           :       44.999966787926 electrons
[leonhard@euler01 ~]$ 
The resource usage summary for the job can be found in the LSF log file.
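
Under Slurm, a comparable resource summary can be queried after the job has finished, for instance with the standard accounting command sacct (the field selection shown here is just one possibility; replace <jobid> with your Slurm job ID):

sacct -j <jobid> --format=JobID,State,Elapsed,MaxRSS,NCPUS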

License information

Orca license

The main developer of ORCA, Prof. Frank Neese, has changed the ORCA end user license agreement (EULA) for versions 4.0 and newer, such that it no longer allows computer centers at academic institutions (like the ETH data center, where Euler is located) to provide a central ORCA installation without access restrictions. The new EULA requires all users to register at the ORCA portal:

https://orcaforum.kofo.mpg.de/app.php/portal

After registering, you can use the command get-access orca, which checks for the existence of your ORCA account and grants you access to the installation if the check succeeds.
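
For example, after your registration at the ORCA portal is complete, run on a login node:

get-access orca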

Notes

Please note that ORCA 4 and newer versions are not supported on the Euler cluster and will stay in the new module category. They are provided on an as-is basis, as the ORCA developers supply only precompiled binaries without suitable documentation, change logs, or a list of known issues. The HPC group and some cluster users tested ORCA 4.0.0 and the bugfix release 4.0.0.2 (the ORCA developers do not document which bugs a new release fixes) and noticed problems when using more than one compute node (128 cores). If you would like to run ORCA 4 or newer on the Euler cluster, we therefore recommend using at most 24 cores.
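
For example, a 24-core ORCA 4.x job confined to a single node could be submitted as follows (the resource values are placeholders; the input file must contain a matching %pal nprocs 24 setting):

sbatch --ntasks=24 --nodes=1 --time=4:00:00 --mem-per-cpu=2g --wrap="orca test1.inp > test1.out"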

We have tried to investigate these problems, but since the mpirun calls are hidden in the (stripped) binaries, we do not know how ORCA calls mpirun, and we cannot modify anything because we do not have access to the source code. Furthermore, the end user license agreement (EULA) explicitly forbids decompiling and/or reverse engineering the ORCA software.

If you encounter a problem when running ORCA, please contact the ORCA developers.

Links

https://orcaforum.cec.mpg.de