LSF to Slurm quick reference


==Introduction==

The commands for Slurm are similar to the ones used in LSF. You can find a mapping of the relevant commands below.

Typing '''slurm''' in your shell will prepare your environment for Slurm.

==Submitting a batch job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub < jobscript.sh</tt> || <tt>sbatch jobscript.sh</tt>
|-
|
jobscript.sh:
 #!/bin/bash
 
 #BSUB -n 4
 #BSUB -W 08:00
 #BSUB -R "rusage[mem=2000]"
 #BSUB -J analysis1
 #BSUB -o analysis1.out
 #BSUB -e analysis1.err
 
 # load modules
 # run command
|
jobscript.sh:
 #!/bin/bash
 
 #SBATCH -n 4
 #SBATCH --time=08:00:00
 #SBATCH --mem-per-cpu=2000
 #SBATCH --job-name=analysis1
 #SBATCH --output=analysis1.out
 #SBATCH --error=analysis1.err
 
 # load modules
 # run command
|}
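
A minimal usage sketch (the job ID shown is illustrative): <tt>sbatch</tt> prints the ID of the submitted job, and options given on the command line override the corresponding <tt>#SBATCH</tt> directives in the script.
 $ sbatch jobscript.sh
 Submitted batch job 1234567
 $ sbatch --time=01:00:00 jobscript.sh    # command-line option overrides #SBATCH --time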

==Interactive job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub -Is [LSF options] bash</tt> || <tt>srun --pty bash</tt>
|}
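
Resource requests are passed to <tt>srun</tt> in the same way as to <tt>sbatch</tt>; a sketch with placeholder values:
 $ srun -n 4 --time=01:00:00 --mem-per-cpu=2000 --pty bash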

==Monitoring a job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bjobs [JOBID]</tt> || <tt>squeue [-j JOBID]</tt>
|}
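
Without <tt>-j</tt>, <tt>squeue</tt> lists all jobs in the queue; to show only your own jobs:
 $ squeue -u $USER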

==Killing a job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bkill [JOBID]</tt> || <tt>scancel [JOBID]</tt>
|}
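
<tt>scancel</tt> can also cancel all of your jobs at once:
 $ scancel -u $USER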

==Environment variables==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>$LSB_JOBID</tt> || <tt>$SLURM_JOB_ID</tt>
|-
| <tt>$LSB_SUBCWD</tt> || <tt>$SLURM_SUBMIT_DIR</tt>
|}
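
A short sketch of these variables inside a job script; the output directory name is purely illustrative:
 #!/bin/bash
 #SBATCH -n 1
 
 # change to the directory the job was submitted from
 cd "$SLURM_SUBMIT_DIR"
 
 # create a per-job output directory named after the job ID
 mkdir -p "output_${SLURM_JOB_ID}"
 echo "job $SLURM_JOB_ID running in $PWD"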

==Check resource usage of a job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bbjobs [JOBID]</tt> || <tt>sacct -j JOBID</tt>
|}
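
The columns printed by <tt>sacct</tt> can be selected with <tt>--format</tt>; the job ID below is a placeholder:
 $ sacct -j 1234567 --format=JobID,JobName,State,Elapsed,MaxRSS,TotalCPU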

==Submit a parallel job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub -n 256 -R "span[ptile=128]"</tt> || <tt>sbatch -n 256 --ntasks-per-node 128 < jobscript.sh</tt> or<br><tt>sbatch -n 256 --nodes 2 < jobscript.sh</tt>
|}

The Slurm options
* <tt>--ntasks-per-core</tt>,
* <tt>--cpus-per-task</tt>,
* <tt>--nodes</tt>, and
* <tt>--ntasks-per-node</tt>
are supported.
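
The same request can also be written as <tt>#SBATCH</tt> directives in the job script; a minimal MPI sketch, where <tt>./my_mpi_program</tt> is a placeholder:
 #!/bin/bash
 #SBATCH -n 256
 #SBATCH --ntasks-per-node=128
 
 # load modules
 
 # srun starts one MPI rank per requested task
 srun ./my_mpi_program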

==Submit a GPU job==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub -R "rusage[ngpus_excl_p=1]"</tt> || <tt>sbatch --gpus 1 < jobscript.sh</tt>
|}

For multi-node jobs you need to use the <tt>--gpus-per-node</tt> option instead.
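
In a job script the same request looks as follows (a sketch; <tt>./my_gpu_program</tt> is a placeholder):
 #!/bin/bash
 #SBATCH -n 1
 #SBATCH --gpus=1
 
 # load modules
 
 ./my_gpu_program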

===Submit a job with a specific GPU model===

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_model0==GeForceGTX1080]"</tt> || <tt>sbatch --gpus gtx_1080:1 < jobscript.sh</tt>
|}

===Submit a GPU job requiring a given amount of GPU memory===

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_mtotal0>=10240]"</tt> || <tt>sbatch --gpus 1 --gres gpumem:10g < jobscript.sh</tt>
|}

==Submit a job using a given share==

{| class="wikitable" border="1" style="width:80%;text-align:left;"
! LSF !! Slurm
|-
| style="width:50%;" | <tt>bsub -G es_example</tt> || <tt>sbatch -A es_example</tt><br>To set as future default:<br><tt>echo account=es_example > $HOME/.slurm/defaults</tt>
|}
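
Inside a job script the share can be requested with the equivalent directive:
 #SBATCH -A es_example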