Parallel job submission
== Shared memory job (OpenMP) ==
<table>
<tr valign=center>
<td style="width: 25%; background: white;">
<br>
[[Image:shared_memory_computing.png|300px]]
<br>
</td>
<td style="width: 75%; background: white;">
In shared-memory parallel computing, multiple processors (or "threads") perform tasks independently but share a common global memory. If a processor modifies an array in this memory, all other processors see the update as well.
* A serial code can be parallelized by marking the sections that should run in parallel with [https://www.openmp.org/ OpenMP directives].
* An OpenMP code runs on a single compute node and can use up to the maximum number of processors in that node, e.g., 24, 36, or 128 processors.
* To compile an OpenMP code (a minimal hello_omp.c sketch is shown after this table):
 $ module load gcc/6.3.0
 $ gcc -o hello_omp -fopenmp hello_omp.c
* To run an OpenMP code, first define the number of processors (threads) in $OMP_NUM_THREADS, then submit the job:
 $ export OMP_NUM_THREADS=8
 $ bsub -n 8 ./hello_omp
</td>
</tr>
</table>
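The compile command above assumes a source file hello_omp.c. Its contents are not part of this page; as a minimal illustrative sketch, a small OpenMP program could look like this:

 /* hello_omp.c -- minimal OpenMP sketch (contents illustrative).
    Each thread prints its own ID; the thread count is taken from
    $OMP_NUM_THREADS at run time. */
 #include <omp.h>
 #include <stdio.h>
 int main(void)
 {
     #pragma omp parallel
     {
         printf("Hello from thread %d of %d\n",
                omp_get_thread_num(), omp_get_num_threads());
     }
     return 0;
 }

With OMP_NUM_THREADS=8, the program prints eight "Hello" lines, one per thread, all running on the same compute node.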
== Distributed memory job (MPI) ==
<table>
<tr valign=center>
<td style="width: 25%; background: white;">
<br>
[[Image:distributed_memory_computing.png|350px]]
<br>
</td>
<td style="width: 75%; background: white;">
In distributed-memory parallel computing, multiple processors (or "cores") perform tasks independently while each processor possesses its own private memory. If a processor modifies an array in its memory, it has to communicate the change to the other processors so that they see the update.
* The Message Passing Interface (MPI) library implements distributed-memory parallel computing and can be programmed in C, C++, and Fortran
* An MPI program can run on a single compute node as well as on multiple compute nodes
* To compile an MPI code, use the MPI compiler wrappers, e.g., mpicc for C code, mpif90 for Fortran code (a minimal mpi_hello_world.c sketch is shown after this table):
 $ module load gcc/6.3.0 openmpi/4.0.2
 $ mpicc -o mpi_hello_world mpi_hello_world.c
* To run an MPI program, launch it with "mpirun":
 $ module load gcc/6.3.0 openmpi/4.0.2
 $ bsub -n 240 mpirun ./mpi_hello_world
</td>
</tr>
</table>
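The compile command above assumes a source file mpi_hello_world.c. Again, its contents are illustrative rather than part of this page; a minimal sketch could look like this:

 /* mpi_hello_world.c -- minimal MPI sketch (contents illustrative).
    Each rank prints its ID and the total number of ranks. */
 #include <mpi.h>
 #include <stdio.h>
 int main(int argc, char **argv)
 {
     int rank, size;
     MPI_Init(&argc, &argv);               /* start the MPI runtime */
     MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank   */
     MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of ranks */
     printf("Hello from rank %d of %d\n", rank, size);
     MPI_Finalize();                       /* shut down MPI         */
     return 0;
 }

Submitted with "bsub -n 240 mpirun ./mpi_hello_world", the scheduler reserves 240 cores and mpirun typically starts one rank per core; the ranks may be spread over several compute nodes.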