__NOTOC__
 
<table style="width: 100%;">
<tr valign=top>
<td style="width: 30%; text-align:left">
< [[Job submission | Submit a job]]
</td>
<td style="width: 35%; text-align:center">
[[Main_Page | Home]]
</td>
<td style="width: 35%; text-align:right">
[[GPU job submission | Submit a GPU job]] >
</td>
</tr>
</table>

<table>
<tr valign=center>
 
== Shared memory job (OpenMP) ==
<td style="width: 25%; background: white;">
<br>
[[Image:shared_memory_computing.png|300px]]
<br>
</td>
<td style="width: 75%; background: white;">
In shared-memory parallel computing, multiple processors (or "threads") perform tasks independently but share a common global memory. If a processor modifies an array in memory, all other processors see the update as well.
* A serial code can be parallelized by marking the sections that should run in parallel with [https://www.openmp.org/ OpenMP directives].
* An OpenMP code runs on a single compute node and can use up to the maximum number of processors in that node, e.g., 24, 36, or 128 processors.
* To compile an OpenMP code:
 $ module load gcc/6.3.0
 $ gcc -o hello_omp -fopenmp hello_omp.c
* To run an OpenMP code, first define the number of processors (threads) in $OMP_NUM_THREADS, then request a matching number of cores:
 $ export OMP_NUM_THREADS=8
 $ bsub -n 8 ./hello_omp
</td>
</tr>
</table>
  
<table>
<tr valign=center>
  
 
== Distributed memory job (MPI) ==
<td style="width: 25%; background: white;">
<br>
[[Image:distributed_memory_computing.png|350px]]
<br>
</td>
<td style="width: 75%; background: white;">
In distributed-memory parallel computing, multiple processors (or "cores") perform tasks independently, and each processor has its own private memory. If a processor modifies an array in its memory, it has to communicate the change to the other processors so that they see the update.
* The Message Passing Interface (MPI) library implements distributed-memory parallel computing and can be used from C, C++, and Fortran.
* An MPI program can run on a single compute node as well as on multiple compute nodes.
* To compile an MPI code, use the MPI compiler wrappers, e.g., mpicc for C code and mpif90 for Fortran code:
 $ module load gcc/6.3.0 openmpi/4.0.2
 $ mpicc -o mpi_hello_world mpi_hello_world.c
* To run an MPI program, launch it with "mpirun":
 $ module load gcc/6.3.0 openmpi/4.0.2
 $ bsub -n 240 mpirun ./mpi_hello_world
</td>
</tr>
</table>
  
== Examples ==
* [[openMP hello world in C]]
* [[MPI hello world in C]]
  
== Further reading ==
* [[Parallel computing]]
* [https://sis.id.ethz.ch/services/consultingtraining/mpi_openmp_course.html Four-day course in parallel programming with MPI/OpenMP at ETH]
  
== Helper ==
* [https://scicomp.ethz.ch/lsf_submission_line_advisor/ LSF Submission Line Advisor]
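The two-command submissions above can also be packaged as an LSF batch script. The following is only a sketch: the core count, wall-time limit, and job name are assumptions you would adapt (the 4-hour limit matches the normal.4h queue), and the existence check keeps the script harmless if the binary has not been built yet:

```shell
#!/bin/bash
#BSUB -n 8            # number of cores to request
#BSUB -W 04:00        # wall-time limit (fits the normal.4h queue)
#BSUB -J hello_omp    # job name

# Match the OpenMP thread count to the requested cores.
export OMP_NUM_THREADS=8

# Run the program only if it has been built (see the compile step above).
if [ -x ./hello_omp ]; then
    ./hello_omp
else
    echo "hello_omp not found; compile it first"
fi
```

Submit it with `bsub < job.sh`; LSF reads the `#BSUB` lines as if they were command-line options.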
<table style="width: 100%;">
<tr valign=top>
<td style="width: 30%; text-align:left">
< [[Job submission | Submit a job]]
</td>
<td style="width: 35%; text-align:center">
[[Main Page | Home]]
</td>
<td style="width: 35%; text-align:right">
[[GPU job submission | Submit a GPU job]] >
</td>
</tr>
</table>

Latest revision as of 09:25, 1 October 2021
