LSF to Slurm quick reference

From ScientificComputing
Revision as of 11:21, 31 August 2022 by Urbanb (talk | contribs) (Submitting simple commands)



To set up your current shell's environment for Slurm, simply execute:


The commands for Slurm are similar to the ones used in LSF. You can find a mapping of the relevant commands below.

Submitting simple commands

LSF                          Slurm
bsub command                 sbatch --wrap command
bsub "command1 ; command2"   sbatch --wrap "command1 ; command2"
bsub "command1 | command2"   sbatch --wrap "command1 | command2"
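Resource options can be combined with --wrap in the same call; the command and values below are purely illustrative:

```shell
# Submit a one-off command with an illustrative resource request:
# 1 hour of run time and 2000 MB of memory per core.
sbatch --time=01:00:00 --mem-per-cpu=2000 \
       --wrap "gzip large_file.txt"
```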

Submitting a script

LSF             Slurm
bsub < script   sbatch < script

#BSUB -n 4
#BSUB -W 08:00
#BSUB -R "rusage[mem=2000,scratch=1000]"
#BSUB -J analysis1
#BSUB -o analysis1.out
#BSUB -e analysis1.err

# load modules
# run command

#SBATCH -n 4
#SBATCH --time=08:00:00
#SBATCH --mem-per-cpu=2000
#SBATCH --job-name=analysis1
#SBATCH --output=analysis1.out
#SBATCH --error=analysis1.err
#SBATCH --tmp=4000

# load modules
# run command

Note that in Slurm you request local scratch space (e.g., --tmp=4000 or --tmp=4g) per node, not per core: the script above therefore requests 4 × 1000 MB of scratch for its four tasks.
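The per-core-to-per-node conversion can be sketched as simple shell arithmetic (variable names are illustrative):

```shell
# Convert a per-core LSF scratch request into Slurm's per-node --tmp value.
NTASKS=4                  # cores requested (bsub -n 4)
SCRATCH_PER_CORE_MB=1000  # LSF: rusage[scratch=1000] is per core
TMP_MB=$((NTASKS * SCRATCH_PER_CORE_MB))
echo "--tmp=${TMP_MB}"    # per-node total to pass to sbatch
```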

Interactive job

LSF                           Slurm
bsub -Is [LSF options] bash   srun --pty bash
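An interactive session can carry resource requests as well; a sketch with illustrative values:

```shell
# Interactive shell on a compute node with 4 cores and 2000 MB per core
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=2000 --pty bash
```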

Monitoring a job

LSF             Slurm
bjobs [JOBID]   squeue [-j JOBID]
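squeue also accepts a user filter and a custom output format; the job ID below is illustrative:

```shell
# All of your own pending and running jobs
squeue -u $USER
# One job, showing job ID, state, and elapsed time
squeue -j 123456 -o "%i %T %M"
```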

Killing a job

LSF             Slurm
bkill [JOBID]   scancel [JOBID]
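scancel can also select jobs by name or by user; the job ID and name below are illustrative:

```shell
scancel 123456            # cancel one job by ID
scancel --name=analysis1  # cancel jobs with a given name
scancel -u $USER          # cancel all of your jobs
```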

Environment variables

LSF          Slurm
LSB_JOBID    SLURM_JOB_ID
LSB_SUBCWD   SLURM_SUBMIT_DIR

Check resource usage of a job

LSF              Slurm
bbjobs [JOBID]   sacct -j JOBID
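sacct's output can be narrowed to the fields of interest; one possible selection (the job ID is illustrative):

```shell
# Elapsed time, peak memory, and final state of a finished job
sacct -j 123456 --format=JobID,JobName,Elapsed,MaxRSS,State
```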

Submit a parallel job

LSF:   bsub -n 256 -R "span[ptile=128]" < script
Slurm: sbatch -n 256 --ntasks-per-node=128 < script
  or   sbatch -n 256 --nodes=2 < script

The Slurm options

  • --ntasks-per-core,
  • --cpus-per-task,
  • --nodes, and
  • --ntasks-per-node

are supported.
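As a sketch, these options can be combined for a hybrid MPI/OpenMP job (the executable name is hypothetical):

```shell
#!/bin/bash
#SBATCH --ntasks=32        # 32 MPI ranks
#SBATCH --cpus-per-task=4  # 4 OpenMP threads per rank
#SBATCH --nodes=2          # spread over two nodes

export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
srun ./hybrid_program      # hypothetical MPI/OpenMP executable
```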

Submit a GPU job

LSF:   bsub -R "rusage[ngpus_excl_p=1]" < script
Slurm: sbatch --gpus=1 < script

For multi-node jobs you need to use the --gpus-per-node option instead.
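A multi-node GPU request could then look like this (node and GPU counts are illustrative, as is the program name):

```shell
# Two nodes with 4 GPUs each, i.e. 8 GPUs in total
sbatch --nodes=2 --gpus-per-node=4 --wrap "srun ./gpu_program"
```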

Submit a job with a specific GPU model

LSF:   bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_model0==NVIDIAGeForceGTX1080]" < script
Slurm: sbatch --gpus=gtx_1080:1 < script

LSF:   bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_model0==NVIDIAGeForceRTX3090]" < script
Slurm: sbatch --gpus=rtx_3090:1 < script
  • For Slurm, currently only the specifiers gtx_1080 and rtx_3090 are supported; more GPU types will be added over time.

Submit a GPU job requiring a given amount of GPU memory

LSF:   bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_mtotal0>=10240]" < script
Slurm: sbatch --gpus=1 --gres=gpumem:10g < script

Submit a job using a given share

LSF                  Slurm
bsub -G es_example   sbatch -A es_example

To set this share as the default for future jobs:
echo account=es_example > $HOME/.slurm/defaults
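Note that the redirection fails if $HOME/.slurm does not exist yet; a safer sequence creates the directory first:

```shell
# Create the directory if needed, then record the default account
mkdir -p $HOME/.slurm
echo account=es_example > $HOME/.slurm/defaults
```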