LSF to Slurm quick reference

From ScientificComputing
Revision as of 15:26, 13 July 2022 by Urbanb (talk | contribs)



The commands for Slurm are similar to the ones used in LSF. You can find a mapping of the relevant commands below.

Submitting a batch job

LSF:

bsub < jobscript.sh

#BSUB -n 4
#BSUB -W 08:00
#BSUB -R "rusage[mem=2000]"
#BSUB -J analysis1
#BSUB -o analysis1.out
#BSUB -e analysis1.err

# load modules
# run command

Slurm:

sbatch jobscript.sh

#SBATCH -n 4
#SBATCH --time=08:00:00
#SBATCH --mem-per-cpu=2000
#SBATCH --job-name=analysis1
#SBATCH --output=analysis1.out
#SBATCH --error=analysis1.err

# load modules
# run command

Note that LSF's -W 08:00 means 8 hours (hours:minutes), while Slurm interprets 8:00 as 8 minutes; the equivalent Slurm limit is --time=08:00:00.
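The directive mapping above is mechanical enough to sketch as a script. The sed rules below are an illustrative sketch only: they cover just the options listed in this table (they are not a general LSF-to-Slurm translator), and the `translate` helper name is made up for this example.

```shell
# Rewrite the common #BSUB directives from the table above into their
# #SBATCH equivalents. Covers only the options shown in this table.
translate() {
  sed -E \
    -e 's/^#BSUB -n ([0-9]+)/#SBATCH -n \1/' \
    -e 's/^#BSUB -W ([0-9]+):([0-9]+)/#SBATCH --time=\1:\2:00/' \
    -e 's/^#BSUB -R "rusage\[mem=([0-9]+)\]"/#SBATCH --mem-per-cpu=\1/' \
    -e 's/^#BSUB -J (.*)/#SBATCH --job-name=\1/' \
    -e 's/^#BSUB -o (.*)/#SBATCH --output=\1/' \
    -e 's/^#BSUB -e (.*)/#SBATCH --error=\1/'
}

# LSF -W is hours:minutes, so 08:00 becomes the Slurm limit 08:00:00.
printf '%s\n' '#BSUB -W 08:00' '#BSUB -J analysis1' | translate
# prints:
# #SBATCH --time=08:00:00
# #SBATCH --job-name=analysis1
```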

Interactive job

LSF:    bsub -Is [LSF options] bash
Slurm:  srun --pty bash
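Resource options combine with srun the same way as with sbatch. A sketch (the values are placeholders, and running it requires access to a Slurm cluster):

```shell
# Interactive shell with 4 cores, 2000 MB per core, for one hour.
srun -n 4 --time=01:00:00 --mem-per-cpu=2000 --pty bash
```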

Monitoring a job

LSF:    bjobs [JOBID]
Slurm:  squeue [-j JOBID]
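squeue takes more selectors than just a job id; a few commonly used ones (the job id is a placeholder):

```shell
# All of your own jobs:
squeue -u $USER
# Only your pending jobs:
squeue -u $USER -t PENDING
# A specific job:
squeue -j 123456
```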

Killing a job

LSF:    bkill [JOBID]
Slurm:  scancel [JOBID]
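scancel can also select jobs by owner or name rather than by id (the job id and name below are placeholders):

```shell
# Cancel a single job:
scancel 123456
# Cancel all of your jobs:
scancel -u $USER
# Cancel jobs by job name:
scancel --name=analysis1
```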

Environment variables

LSF:    $LSB_JOBID
Slurm:  $SLURM_JOB_ID

LSF:    $LSB_SUBCWD
Slurm:  $SLURM_SUBMIT_DIR
Check resource usage of a job

LSF:    bbjobs [JOBID]
Slurm:  sacct -j JOBID
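sacct's output columns can be chosen with --format; the fields below are standard sacct field names and the job id is a placeholder:

```shell
# Report run time, peak memory, and final state of a finished job:
sacct -j 123456 --format=JobID,JobName,Elapsed,MaxRSS,State
```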

Request GPUs

LSF:    bsub -R "rusage[ngpus_excl_p=1]"
Slurm:  sbatch --gpus 1

Request a specific model of GPU

LSF:    bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_model0==GeForceGTX1080]"
Slurm:  sbatch --gpus gtx_1080:1

Request GPUs with a given amount of GPU memory

LSF:    bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_mtotal0>=10240]"
Slurm:  sbatch --gpus 1 --gres gpumem:10g
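Put together, a GPU batch script combining these flags with directives from the batch-job section might look as follows. This is a sketch: the file name and resource values are placeholders, and gpumem is the site-specific gres shown in the table above, not a standard Slurm resource name.

```shell
# Write an example GPU job script; values and file name are placeholders.
cat > gpu_jobscript.sh <<'EOF'
#!/bin/bash
#SBATCH -n 1
#SBATCH --time=04:00:00
#SBATCH --mem-per-cpu=2000
#SBATCH --gpus=1
#SBATCH --gres=gpumem:10g

# load modules
# run command
EOF

cat gpu_jobscript.sh
```

The script would then be submitted with `sbatch gpu_jobscript.sh`.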