LSF to Slurm quick reference

==Introduction==

The Slurm commands are similar to the ones used in LSF. The tables below map the most relevant LSF commands and options to their Slurm equivalents.

==Submitting a batch job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub < jobscript.sh</tt> || <tt>sbatch jobscript.sh</tt>
|-
| jobscript.sh:
<pre>
#!/bin/bash

#BSUB -n 4
#BSUB -W 08:00
#BSUB -R "rusage[mem=2000]"
#BSUB -J analysis1
#BSUB -o analysis1.out
#BSUB -e analysis1.err

# load modules
# run command
</pre>
| jobscript.sh:
<pre>
#!/bin/bash

#SBATCH -n 4
#SBATCH --time=08:00:00
#SBATCH --mem-per-cpu=2000
#SBATCH --job-name=analysis1
#SBATCH --output=analysis1.out
#SBATCH --error=analysis1.err

# load modules
# run command
</pre>
|}
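
The same resources can also be requested on the <tt>sbatch</tt> command line instead of via <tt>#SBATCH</tt> directives. A minimal sketch, reusing the placeholder values from the table above:

<pre>
# one-off submission with the resource requests given as command-line options
sbatch -n 4 --time=08:00:00 --mem-per-cpu=2000 \
       --job-name=analysis1 --output=analysis1.out --error=analysis1.err \
       jobscript.sh
</pre>

Options given on the command line take precedence over the corresponding <tt>#SBATCH</tt> directives in the script.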

==Interactive job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub -Is [LSF options] bash</tt> || <tt>srun --pty bash</tt>
|}
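
If the interactive session needs more than the default resources, <tt>srun</tt> accepts the same resource options as <tt>sbatch</tt>. A minimal sketch with arbitrary example values:

<pre>
# interactive shell with 4 tasks, 2000 MB per CPU and a one-hour time limit
srun -n 4 --mem-per-cpu=2000 --time=01:00:00 --pty bash
</pre>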

==Monitoring a job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bjobs [JOBID]</tt> || <tt>squeue [-j JOBID]</tt>
|}
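
Beyond looking up a single job, <tt>squeue</tt> can also filter by user. A short sketch, where <tt>12345</tt> stands in for a real job ID:

<pre>
squeue -j 12345     # status of a single job
squeue -u $USER     # all of your pending and running jobs
</pre>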

==Killing a job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bkill [JOBID]</tt> || <tt>scancel [JOBID]</tt>
|}
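
<tt>scancel</tt> takes the job ID reported at submission time and can also cancel by user. A short sketch, again with <tt>12345</tt> as a placeholder job ID:

<pre>
scancel 12345       # cancel a single job
scancel -u $USER    # cancel all of your jobs
</pre>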

==Environment variables==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>$LSB_JOBID</tt> || <tt>$SLURM_JOB_ID</tt>
|-
| style="width:50%;" | <tt>$LSB_SUBCWD</tt> || <tt>$SLURM_SUBMIT_DIR</tt>
|}
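
A typical use of these variables inside a jobscript is to create a job-specific working directory and to refer back to the submission directory. A minimal sketch; the scratch path and the input file name are illustrative assumptions, not cluster-specific settings:

<pre>
#!/bin/bash
#SBATCH -n 1

# job-specific working directory named after the Slurm job ID (hypothetical path)
workdir=/scratch/$USER/$SLURM_JOB_ID
mkdir -p "$workdir"
cd "$workdir"

# copy input from the directory the job was submitted from (placeholder file name)
cp "$SLURM_SUBMIT_DIR"/input.dat .

# run command
</pre>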

==Check resource usage of a job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bbjobs [JOBID]</tt> || <tt>sacct -j JOBID</tt>
|}
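
<tt>sacct</tt> can print selected accounting fields with <tt>--format</tt>. A short sketch using standard field names, with <tt>12345</tt> as a placeholder job ID:

<pre>
sacct -j 12345 --format=JobID,JobName,State,Elapsed,MaxRSS,TotalCPU
</pre>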

==Submit a parallel job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub -n 256 -R "span[ptile=128]"</tt> || <tt>sbatch -n 256 --ntasks-per-node 128 < jobscript.sh</tt> or <tt>sbatch -n 256 --nodes 2 < jobscript.sh</tt>
|}

The Slurm options

* <tt>--ntasks-per-core</tt>,
* <tt>--cpus-per-task</tt>,
* <tt>--nodes</tt>, and
* <tt>--ntasks-per-node</tt>

are supported.
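
The same task layout can be requested with <tt>#SBATCH</tt> directives inside the jobscript. A minimal sketch of a hypothetical MPI job; the program name <tt>my_mpi_prog</tt> is a placeholder:

<pre>
#!/bin/bash
#SBATCH --ntasks=256
#SBATCH --ntasks-per-node=128

# load modules

# srun starts one MPI rank per requested task
srun ./my_mpi_prog
</pre>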

==Submit a GPU job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub -R "rusage[ngpus_excl_p=1]"</tt> || <tt>sbatch --gpus 1 < jobscript.sh</tt>
|}

For multi-node jobs you need to use the <tt>--gpus-per-node</tt> option instead.
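
Written as a jobscript, the request from the table could look as follows. A minimal sketch; <tt>my_gpu_prog</tt> is a placeholder program:

<pre>
#!/bin/bash
#SBATCH -n 1
#SBATCH --gpus=1
# for a multi-node job, request GPUs per node instead, e.g.
##SBATCH --nodes=2
##SBATCH --gpus-per-node=1

# load modules

./my_gpu_prog
</pre>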

===Submit a job with a specific GPU model===

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_model0==GeForceGTX1080]"</tt> || <tt>sbatch --gpus gtx_1080:1 < jobscript.sh</tt>
|}
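
The model selection can also be written as a directive, using the same model string as in the table above. A minimal sketch with a placeholder program name:

<pre>
#!/bin/bash
#SBATCH --gpus=gtx_1080:1

./my_gpu_prog
</pre>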

===Submit a GPU job requiring a given amount of GPU memory===

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_mtotal0>=10240]"</tt> || <tt>sbatch --gpus 1 --gres gpumem:10g < jobscript.sh</tt>
|}
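
The memory-based request can likewise be given as directives. A minimal sketch that reuses the <tt>gpumem</tt> generic resource from the table and a placeholder program name:

<pre>
#!/bin/bash
#SBATCH --gpus=1
#SBATCH --gres=gpumem:10g

./my_gpu_prog
</pre>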