Introduction
Slurm commands are similar to their LSF counterparts. Below is a mapping of the most relevant commands and options.
Submitting a batch job
LSF:   bsub < jobscript.sh
Slurm: sbatch jobscript.sh

LSF jobscript.sh:

#!/bin/bash
#BSUB -n 4
#BSUB -W 08:00
#BSUB -R "rusage[mem=2000]"
#BSUB -J analysis1
#BSUB -o analysis1.out
#BSUB -e analys1.err
# load modules
# run command

Slurm jobscript.sh:

#!/bin/bash
#SBATCH -n 4
#SBATCH --time=08:00:00
#SBATCH --mem-per-cpu=2000
#SBATCH --job-name=analysis1
#SBATCH --output=analysis1.out
#SBATCH --error=analysis1.err
# load modules
# run command

Note that LSF's -W 08:00 requests 8 hours, while Slurm would parse --time=8:00 as 8 minutes; the 8-hour form is --time=08:00:00.
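On success, sbatch prints the ID assigned to the job (the ID below is illustrative):

$ sbatch jobscript.sh
Submitted batch job 123456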
Interactive job
LSF:   bsub -Is [LSF options] bash
Slurm: srun --pty bash
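srun accepts the same resource options as sbatch, so an interactive shell with specific resources can be requested directly (the values here are illustrative):

srun -n 4 --mem-per-cpu=2000 --time=01:00:00 --pty bash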
Monitoring a job
LSF:   bjobs [JOBID]
Slurm: squeue [-j JOBID]
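Without arguments, squeue lists all jobs in the queue; to show only your own jobs, filter by user:

squeue -u $USER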
Killing a job
LSF:   bkill [JOBID]
Slurm: scancel [JOBID]
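scancel can also select jobs by attribute rather than by ID, for example by job name or by user:

scancel --name=analysis1   # cancel jobs with this job name
scancel -u $USER           # cancel all of your jobs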
Environment variables
LSF:   $LSB_JOBID
Slurm: $SLURM_JOB_ID

LSF:   $LSB_SUBCWD
Slurm: $SLURM_SUBMIT_DIR
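A sketch of how these variables are typically used inside a jobscript; the scratch path and input file name are illustrative, not site defaults:

#!/bin/bash
#SBATCH -n 1
# create a job-specific working directory (path is illustrative)
workdir=/tmp/job_$SLURM_JOB_ID
mkdir -p "$workdir"
# stage input from the directory the job was submitted from
cp "$SLURM_SUBMIT_DIR/input.dat" "$workdir/"
cd "$workdir"
# run command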
Check resource usage of a job
LSF:   bbjobs [JOBID]
Slurm: sacct -j JOBID
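sacct's default output is terse; a --format list selects specific fields. The field names below are standard sacct fields, and the selection is just one example:

sacct -j JOBID --format=JobID,JobName,State,Elapsed,MaxRSS,TotalCPU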
Request GPUs
LSF:   bsub -R "rusage[ngpus_excl_p=1]"
Slurm: sbatch --gpus=1 jobscript.sh
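As with the other options, the GPU request can also go into the jobscript itself as an #SBATCH directive; the time limit here is illustrative:

#!/bin/bash
#SBATCH --gpus=1
#SBATCH --time=04:00:00
# load modules
# run command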
Request a specific model of GPU
LSF:   bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_model0==GeForceGTX1080]"
Slurm: sbatch --gpus=gtx_1080:1 jobscript.sh
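The GPU type names accepted by --gpus are configured per cluster; sinfo's GRES output field (%G) can show what each partition offers:

sinfo -o "%P %G"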
Request GPUs with a given amount of GPU memory
LSF:   bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_mtotal0>=10240]"
Slurm: sbatch --gpus=1 --gres=gpumem:10g jobscript.sh
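Putting the pieces together, a sketch of a GPU jobscript; gpumem is a site-defined GRES (as in the table above) and the other values are illustrative:

#!/bin/bash
#SBATCH -n 1
#SBATCH --time=04:00:00
#SBATCH --gpus=1
#SBATCH --gres=gpumem:10g
# load modules
# run command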