Parallel job submission

From ScientificComputing

Revision as of 14:29, 17 August 2021



Shared memory job (OpenMP)


[Image: shared memory computing — multiple processors sharing one global memory]

  • In shared-memory parallel computing, multiple processors (or "threads") perform tasks independently but share a common global memory. If one processor modifies an array in that memory, all other processors see the update as well.
  • A serial code can be parallelized by marking the sections that should run in parallel with OpenMP directives.
  • An OpenMP code runs on a single compute node and can use up to the maximum number of processors in that node, e.g., 24, 36, or 128 processors.
  • To run an OpenMP code, set the number of processors (threads) in $OMP_NUM_THREADS
$ export OMP_NUM_THREADS=8
$ bsub -n 8 ./program

Distributed memory job (MPI)


[Image: distributed memory computing — each processor with its own private memory, connected by a network]

  • In distributed-memory parallel computing, multiple processors (or "cores") perform tasks independently, and each possesses its own private memory. If a processor modifies an array in its memory, it must communicate the change to the other processors so that they see the update.
  • The Message Passing Interface (MPI) implements distributed-memory parallel computing and can be programmed in C, C++, and Fortran.
  • An MPI program can run on a single compute node as well as on multiple compute nodes.
  • An MPI program must be launched using "mpirun".
$ module load gcc/6.3.0 openmpi/4.0.2
$ bsub -n 240 mpirun ./program

