LSF to Slurm quick reference

Introduction

The commands for Slurm are similar to the ones used in LSF. You can find a mapping of the relevant commands below.

Submitting a batch job

LSF:    bsub < jobscript.sh
Slurm:  sbatch jobscript.sh
LSF jobscript.sh:
#!/bin/bash

#BSUB -n 4
#BSUB -W 08:00
#BSUB -R "rusage[mem=2000]"
#BSUB -J analysis1
#BSUB -o analysis1.out
#BSUB -e analysis1.err

# load modules
# run command
Slurm jobscript.sh:
#!/bin/bash

#SBATCH -n 4
#SBATCH --time=08:00:00
#SBATCH --mem-per-cpu=2000
#SBATCH --job-name=analysis1
#SBATCH --output=analysis1.out
#SBATCH --error=analysis1.err

# load modules
# run command
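
On submission, sbatch prints the ID of the new job; a minimal example (the job ID shown is illustrative):

sbatch jobscript.sh
# Submitted batch job 1234567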

Interactive job

LSF:    bsub -Is [LSF options] bash
Slurm:  srun --pty bash
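
srun accepts the same resource options as sbatch; a sketch of an interactive shell with explicit resources (the values are only examples):

# one task with 4 cores, 2000 MB per core, for one hour
srun --ntasks=1 --cpus-per-task=4 --mem-per-cpu=2000 --time=1:00:00 --pty bash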

Monitoring a job

LSF:    bjobs [JOBID]
Slurm:  squeue [-j JOBID]
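
A few common squeue variants (standard Slurm options; the job ID is illustrative):

squeue                 # all jobs currently in the queue
squeue -j 1234567      # a specific job
squeue -u $USER        # only your own jobs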

Killing a job

LSF:    bkill [JOBID]
Slurm:  scancel [JOBID]
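
Further standard scancel forms (job ID and job name are illustrative):

scancel 1234567            # cancel a single job
scancel -u $USER           # cancel all of your jobs
scancel --name=analysis1   # cancel jobs by name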

Environment variables

LSF            Slurm
$LSB_JOBID     $SLURM_JOB_ID
$LSB_SUBCWD    $SLURM_SUBMIT_DIR
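
A minimal sketch of using these variables inside a Slurm job script:

#!/bin/bash
#SBATCH -n 1

cd "$SLURM_SUBMIT_DIR"      # directory the job was submitted from (Slurm starts here by default)
echo "Job $SLURM_JOB_ID running on $(hostname)"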

Check resource usage of a job

LSF:    bbjobs [JOBID]
Slurm:  sacct -j JOBID
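
sacct can also report selected fields via --format, for example (the job ID is illustrative):

sacct -j 1234567 --format=JobID,JobName,State,Elapsed,MaxRSS,TotalCPU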

Submit a parallel job

LSF:    bsub -n 256 -R "span[ptile=128]"
Slurm:  sbatch -n 256 --ntasks-per-node 128 < jobscript.sh
        or: sbatch -n 256 --nodes 2 < jobscript.sh

The Slurm options

  • --ntasks-per-core,
  • --cpus-per-task,
  • --nodes, and
  • --ntasks-per-node

are supported.
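
A sketch of the same request inside a job script, assuming a hypothetical MPI program ./my_mpi_program:

#!/bin/bash
#SBATCH -n 256
#SBATCH --ntasks-per-node=128
#SBATCH --time=08:00:00

# launch one MPI rank per task
srun ./my_mpi_program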

Submit a GPU job

LSF:    bsub -R "rusage[ngpus_excl_p=1]"
Slurm:  sbatch --gpus 1 < jobscript.sh

For multi-node jobs you need to use the --gpus-per-node option instead.
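
A sketch of the corresponding #SBATCH directives (the program name is a placeholder):

#!/bin/bash
#SBATCH --gpus=1                 # single-node job
# For a multi-node job use --gpus-per-node instead, e.g.:
#   #SBATCH --nodes=2
#   #SBATCH --gpus-per-node=1

./my_gpu_program                 # placeholder executable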

Submit a job with a specific GPU model

LSF:    bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_model0==GeForceGTX1080]"
Slurm:  sbatch --gpus gtx_1080:1 < jobscript.sh
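
The same model request can also be written as a directive in the job script (sketch; program name is a placeholder):

#!/bin/bash
#SBATCH --gpus=gtx_1080:1

./my_gpu_program                 # placeholder executable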

Submit a GPU job requiring a given amount of GPU memory

LSF:    bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_mtotal0>=10240]"
Slurm:  sbatch --gpus 1 --gres gpumem:10g < jobscript.sh
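
And as directives inside a job script (a sketch; gpumem is the site-specific GRES used in the table above, and the program name is a placeholder):

#!/bin/bash
#SBATCH --gpus=1
#SBATCH --gres=gpumem:10g        # request at least 10 GB of GPU memory

./my_gpu_program                 # placeholder executable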

Submit a job using a given share

LSF:    bsub -G es_example
Slurm:  sbatch -A es_example

To set es_example as the default for future jobs:

echo account=es_example > $HOME/.slurm/defaults
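
A sketch of setting the default share and then submitting without -A (it assumes $HOME/.slurm may not exist yet):

mkdir -p $HOME/.slurm
echo account=es_example > $HOME/.slurm/defaults
sbatch jobscript.sh              # now accounted to es_example by default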