NAMD
Category
Chemistry, Molecular Dynamics

Description
Nanoscale Molecular Dynamics (NAMD, formerly Not Another Molecular Dynamics Program) is computer software for molecular dynamics simulation, written using the Charm++ parallel programming model. It is noted for its parallel efficiency and is often used to simulate large systems (millions of atoms). It is developed as a joint collaboration between the Theoretical and Computational Biophysics Group (TCB) and the Parallel Programming Laboratory (PPL) at the University of Illinois at Urbana–Champaign.

Available versions (Euler, old software stack)
Legacy versions | Supported versions | New versions |
---|---|---|
2.10 | 2.10_test | |
Please note that this page refers to installations from the old software stack. There are two software stacks on Euler. Newer versions of software are found in the new software stack.
Environment modules (Euler, old software stack)
The namd module is a smart module: it checks which compiler and MPI library modules are loaded and then loads the corresponding NAMD version. The module load commands below use the standard compiler gcc/4.8.2 and the standard MPI library open_mpi/1.6.5.

Version | Module load command | Additional modules loaded automatically |
---|---|---|
2.10 | module load gcc/4.8.2 open_mpi/1.6.5 namd/2.10 | |
2.10_test | module load gcc/4.8.2 open_mpi/1.6.5 namd/2.10_test | |
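As a quick sanity check before submitting jobs (a minimal interactive example using the module versions listed above), you can load the modules and verify that the namd2 binary is found:

module load gcc/4.8.2 open_mpi/1.6.5 namd/2.10
which namd2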
How to submit a job
You need to submit your NAMD calculations in batch mode through the batch system. To run a simulation with an input file input.in, use the following command:

sbatch [Slurm options] --wrap="namd2 input.in"
By default, the output is written to the file slurm-XXXXXXX.out, where XXXXXXX is the job ID. You need to replace [Slurm options] with the Slurm parameters for the resource requirements of the job; these parameters are documented on the wiki page about the batch system. For example,

sbatch --time=12:00:00 --mem-per-cpu=500 --wrap="namd2 input.in"

will request a runtime of up to 12 hours and 500 MB of RAM per core.
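Instead of passing everything via --wrap, the same request can be written as a regular Slurm job script and submitted with sbatch job.sh. The following is a minimal sketch (the file name job.sh, the resource values and the module versions are placeholders; adjust them to your simulation):

#!/bin/bash
#SBATCH --time=12:00:00
#SBATCH --ntasks=1
#SBATCH --mem-per-cpu=500
# load the compiler, MPI library and NAMD, then run the simulation
module load gcc/4.8.2 open_mpi/1.6.5 namd/2.10
namd2 input.in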
Jobs that can run on a single node (up to 128 cores) can be submitted in a similar way:
sbatch [Slurm options] --ntasks=N --wrap="charmrun +p N +isomalloc_sync namd2 input.in"
Parallel jobs

You can run NAMD on a single node (for example, up to 36 cores on Euler) or across multiple nodes. To run on a single node, request a number of processors, N, from the batch system and tell namd2 to use the same number of threads:
sbatch [Slurm options] --ntasks=N --wrap="charmrun +p N +isomalloc_sync +setcpuaffinity namd2 input.in"
where [Slurm options] are the Slurm parameters for the resource requirements of the job.
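For example, a single-node run on 24 cores for up to four hours (the core count, runtime and memory below are placeholder values) could be submitted as:

sbatch --ntasks=24 --time=04:00:00 --mem-per-cpu=1000 --wrap="charmrun +p 24 +isomalloc_sync +setcpuaffinity namd2 input.in"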
You can also run NAMD calculations on more than one node. Since multiple nodes will be used in this case, you must load an MPI module (such as MVAPICH2) before loading the NAMD module. Then use the following command to submit such a job to the batch system:
sbatch [Slurm options] --ntasks=N --wrap="mpirun namd2 input.in"

where [Slurm options] are the Slurm parameters for the resource requirements of the job.
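As an illustration, a multi-node run on 96 cores (again with placeholder resource values) could look like:

sbatch --ntasks=96 --time=24:00:00 --mem-per-cpu=1000 --wrap="mpirun namd2 input.in"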
License information
NAMD license

Notes
Performance expectations
To help you decide how and where to run, the ApoA1 benchmark was run with NAMD 2.10 for reference. All Euler nodes show similar performance.
Cores | Performance (ns/day) |
---|---|
1 | 0.10 |
24 | 2.3 ± 0.05 |

Cores | Performance (ns/day) |
---|---|
1 | 0.10 |
24 | 2.25 ± 0.1 |
48 | 4.3 |
72 | 5.9 |
96 | 7.5 |
144 | 10.9 |
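As a rough guide from the second table: going from 24 to 144 cores raises the throughput from about 2.25 ns/day to 10.9 ns/day, a speedup of roughly 4.8x on 6x the cores, i.e. around 80% parallel efficiency for this benchmark. A quick estimate like this can help you choose a sensible core count before requesting resources.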