Parallel job submission
== Shared memory job (OpenMP) ==
[[Image:shared_memory_computing.png|300px]]
In shared-memory parallel computing, multiple processors (or "threads") perform tasks independently but share a common global memory. If a processor modifies an array in this memory, all other processors can see the update as well.

Compile an OpenMP code with the -fopenmp flag:

 $ module load gcc/6.3.0
 $ gcc -o hello_omp -fopenmp hello_omp.c

Set the number of threads and submit the program as a batch job:

 $ export OMP_NUM_THREADS=8
 $ bsub -n 8 ./hello_omp
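The hello_omp.c source itself is not listed on this page. As a point of reference, a minimal OpenMP hello-world of the kind compiled above might look like the following sketch (assumed contents, not necessarily the actual file):

 /* hello_omp.c -- assumed minimal example, not the page's actual file */
 #include <stdio.h>
 #include <omp.h>
 
 int main(void) {
     /* Each thread in the parallel region prints its own ID;
        the number of threads is taken from OMP_NUM_THREADS */
     #pragma omp parallel
     printf("Hello from thread %d of %d\n",
            omp_get_thread_num(), omp_get_num_threads());
     return 0;
 }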
== Distributed memory job (MPI) ==
In distributed-memory parallel computing, multiple processors (or "cores") perform tasks independently while each processor possesses its own private memory. If a processor modifies an array in its memory, it has to communicate with the other processors so that they can see the update.
* The Message Passing Interface (MPI) library implements distributed-memory parallel computing and can be used from C, C++ and Fortran
* An MPI program can run on a single compute node as well as on multiple compute nodes
* To compile an MPI code, use MPI wrappers, e.g., mpicc for C code and mpif90 for Fortran code:
 $ module load gcc/6.3.0 openmpi/4.0.2
 $ mpicc -o mpi_hello_world mpi_hello_world.c
Run the MPI program as a batch job on 240 cores:

 $ module load gcc/6.3.0 openmpi/4.0.2
 $ bsub -n 240 mpirun ./mpi_hello_world
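Similarly, the mpi_hello_world.c source is not shown on this page. A minimal sketch, assuming a standard MPI hello-world, might be:

 /* mpi_hello_world.c -- assumed minimal example, not the page's actual file */
 #include <stdio.h>
 #include <mpi.h>
 
 int main(int argc, char **argv) {
     int rank, size;
     MPI_Init(&argc, &argv);               /* start the MPI runtime */
     MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* rank (ID) of this process */
     MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
     printf("Hello from rank %d of %d\n", rank, size);
     MPI_Finalize();                       /* shut down the MPI runtime */
     return 0;
 }

With a submission like the one above, each MPI rank started by mpirun prints its own line.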
== Examples ==
== Further reading ==
== Helper ==