Parallel job submission
From ScientificComputing
== Shared memory job (OpenMP) ==
<table>
<tr valign=center>
<td style="width: 25%; background: white;">
<br>
[[Image:shared_memory_computing.png|350px]]
<br>
</td>
<td style="width: 75%; background: white;">
* In shared-memory parallel computing, multiple processors (or "threads") perform tasks independently but share a common global memory. If a processor modifies an array in memory, all other processors see this update as well.
* A serial code can be parallelized by marking the code sections that should run in parallel with [https://www.openmp.org/ OpenMP directives].
 $ export OMP_NUM_THREADS=8
 $ bsub -n 8 ./program
</td>
</tr>
</table>
== Distributed memory job (MPI) ==
<table>
<tr valign=center>
<td style="width: 25%; background: white;">
<br>
[[Image:distributed_memory_computing.png|350px]]
<br>
</td>
<td style="width: 75%; background: white;">
* In distributed-memory parallel computing, multiple processors (or "cores") perform tasks independently while each processor has its own private memory. If a processor modifies an array in its memory, it has to communicate the change to the other processors so that they see the update.
* The Message Passing Interface (MPI) library implements distributed-memory parallel computing and can be used from C, C++ and Fortran.
 $ module load gcc/6.3.0 openmpi/4.0.2
 $ bsub -n 240 mpirun ./program
</td>
</tr>
</table>
Revision as of 14:29, 17 August 2021