Category
Chemistry, Molecular Dynamics

Description
Nanoscale Molecular Dynamics (NAMD, formerly Not Another Molecular Dynamics Program) is computer software for molecular dynamics simulation, written using the Charm++ parallel programming model. It is noted for its parallel efficiency and is often used to simulate large systems (millions of atoms). It is developed jointly by the Theoretical and Computational Biophysics Group (TCB) and the Parallel Programming Laboratory (PPL) at the University of Illinois at Urbana–Champaign.
|Legacy versions||Supported versions||New versions|
Environment modules
The namd module is a smart module: it checks for a loaded compiler module and MPI library and then loads the corresponding NAMD version. The module load examples below use the standard compiler gcc/4.8.2 and the standard MPI library open_mpi/1.6.5.
|Version||Module load command||Additional modules loaded automatically|
|2.10||module load gcc/4.8.2 open_mpi/1.6.5 namd/2.10|
|2.10_test||module load gcc/4.8.2 open_mpi/1.6.5 namd/2.10_test|
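As a quick sanity check, the load command from the table can be scripted. The guard and the dry-run echo below are illustrative additions so the snippet also works on machines without environment modules; they are not part of the cluster setup.

```shell
# Load the toolchain that the smart namd module expects, then check
# which namd2 binary ends up on the PATH.
LOAD_CMD="module load gcc/4.8.2 open_mpi/1.6.5 namd/2.10"

if command -v module >/dev/null 2>&1; then
    $LOAD_CMD            # on the cluster: actually load the modules
    which namd2          # should point into the NAMD 2.10 installation
else
    echo "$LOAD_CMD"     # off the cluster: just show the command
fi
```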
How to submit a job
NAMD calculations need to be submitted in batch mode through the batch system. To run a simulation with an input file input.in, use the following command:
bsub [LSF options] "namd2 input.in"
By default, the output is written to the file lsf.oXXXXXXX, where XXXXXXX corresponds to the job ID. Replace [LSF options] with the LSF parameters for the resource requirements of the job; documentation on the parameters of bsub can be found on the wiki page about the batch system. For example,
bsub -W 12:00 -R "rusage[mem=500]" "namd2 input.in"
will request a runtime of up to 12 hours and 500 MB of RAM.
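The serial submission above can be wrapped in a small script so the resource values live in variables. The wall-time and memory figures mirror the example above; the dry-run fallback is an illustrative addition for use off the cluster.

```shell
# Assemble and submit a serial NAMD job. Values mirror the example
# above: 12 h wall time, 500 MB of RAM.
WALLTIME="12:00"
MEM_MB=500
NAMD_CMD="namd2 input.in"

if command -v bsub >/dev/null 2>&1; then
    bsub -W "$WALLTIME" -R "rusage[mem=$MEM_MB]" "$NAMD_CMD"
else
    # Dry run off-cluster: print what would be submitted.
    echo "bsub -W $WALLTIME -R rusage[mem=$MEM_MB] $NAMD_CMD"
fi
```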
Jobs that can run on a single node (24 cores) can be submitted in a similar way:
bsub [LSF options] -n N "charmrun +p N +isomalloc_sync namd2 input.in"
Parallel jobs
You can run NAMD on a single node (for example, up to 36 cores on Euler) or over multiple nodes.
To run on a single node, request a number of processors N from the batch system and tell namd2 to use the same number of threads:
bsub [LSF options] -n N "charmrun +p N +isomalloc_sync +setcpuaffinity namd2 input.in"
where [LSF options] are the LSF parameters for the resource requirements of the job.
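The single-node command can be sketched with a concrete core count. The 24-core value is an example; keeping the LSF -n request and the charmrun +p argument in one variable avoids mismatches, and the dry-run guard is an illustrative addition.

```shell
# Single-node run: the LSF slot count (-n) and the charmrun thread
# count (+p) must agree, so both come from the same variable.
N=24
RUN_CMD="charmrun +p $N +isomalloc_sync +setcpuaffinity namd2 input.in"

if command -v bsub >/dev/null 2>&1; then
    bsub -n "$N" "$RUN_CMD"
else
    echo "$RUN_CMD"   # dry run off-cluster
fi
```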
You can also run NAMD calculations on more than one node. Since multiple nodes will be used in this case, you must load an MPI module (such as MVAPICH2) before loading the NAMD module. Then use the following command to submit such a job to the batch system:
bsub [LSF options] -n N mpirun "namd2 input.in"
where [LSF options] are the LSF parameters for the resource requirements of the job.
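A multi-node submission might look like the sketch below. The 48-core request, the mvapich2 module name, and the span[ptile=24] placement (24 MPI ranks per node) are illustrative assumptions, not fixed cluster settings.

```shell
# Multi-node MPI run: load an MPI module before the namd module, then
# let mpirun start namd2 across the allocated nodes.
N=48          # total cores, e.g. two 24-core nodes (example value)
PTILE=24      # ranks per node (example value)

if command -v module >/dev/null 2>&1; then
    module load mvapich2 namd   # module names are assumptions
fi
if command -v bsub >/dev/null 2>&1; then
    bsub -n "$N" -R "span[ptile=$PTILE]" mpirun "namd2 input.in"
else
    echo "bsub -n $N -R span[ptile=$PTILE] mpirun namd2 input.in"
fi
```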
License information
NAMD license
To help decide how and where to run, the ApoA1 benchmark has been run with NAMD 2.10 for reference. All Euler nodes show similar performance.