__NOTOC__
<table style="width: 100%;">
<tr valign=top>
<td style="width: 30%; text-align:left">
< [[Job submission | Submit a job]]
</td>
<td style="width: 35%; text-align:center">
[[Main_Page | Home]]
</td>
<td style="width: 35%; text-align:right">
[[GPU job submission | Submit a GPU job]] >
</td>
</tr>
</table>

<table>
<tr valign=center>
== Shared memory job (OpenMP) ==
<td style="width: 25%; background: white;">
<br>
[[Image:shared_memory_computing.png|300px]]
<br>
</td>
<td style="width: 75%; background: white;">
In shared-memory parallel computing, multiple processors (or "threads") perform tasks independently but share a common global memory. If a processor modifies an array in memory, all other processors see the update as well (a sketch illustrating this follows at the end of this section).
* A serial code can be parallelized by marking the code sections that should run in parallel with [https://www.openmp.org/ OpenMP directives].
* An OpenMP code runs on a single compute node and can use at most the number of processors in that node, e.g., 24, 36, or 128 processors.
* To compile an OpenMP code (a minimal hello_omp.c is sketched after this list):
 $ module load gcc/6.3.0
 $ gcc -o hello_omp -fopenmp hello_omp.c
* To run an OpenMP code, first set the number of threads in the $OMP_NUM_THREADS environment variable, then submit the job:
 $ export OMP_NUM_THREADS=8
 $ bsub -n 8 ./hello_omp
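For illustration, a minimal sketch of what hello_omp.c could look like (the actual code is linked under Examples below; this is an assumed hello-world, not the wiki's exact file):

 #include <stdio.h>
 #include <omp.h>
 
 int main(void) {
     /* each thread in the team executes this block independently */
     #pragma omp parallel
     {
         printf("Hello from thread %d of %d\n",
                omp_get_thread_num(), omp_get_num_threads());
     }
     return 0;
 }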
</td>
</tr>
</table>
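To see why shared memory makes such updates visible automatically, here is a sketch (a hypothetical shared_update.c, not part of the wiki examples) in which one thread writes to a shared array and, after a barrier, every thread reads the new value without any explicit communication. It compiles like hello_omp.c above, with gcc -fopenmp:

 #include <stdio.h>
 #include <omp.h>
 
 int main(void) {
     int data[1] = {0};            /* resides in the shared global memory */
     #pragma omp parallel shared(data)
     {
         if (omp_get_thread_num() == 0)
             data[0] = 42;         /* one thread updates the array */
         #pragma omp barrier       /* wait until the update has happened */
         /* every thread now sees the new value; no message passing needed */
         printf("Thread %d sees data[0] = %d\n",
                omp_get_thread_num(), data[0]);
     }
     return 0;
 }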
<table>
<tr valign=center>
== Distributed memory job (MPI) ==
<td style="width: 25%; background: white;">
<br>
[[Image:distributed_memory_computing.png|350px]]
<br>
</td>
<td style="width: 75%; background: white;">
In distributed-memory parallel computing, multiple processors (or "cores") perform tasks independently, and each processor has its own private memory. If a processor modifies an array in its memory, it must explicitly communicate the change to the other processors so that they see the update (a broadcast sketch illustrating this follows at the end of this section).
* The Message Passing Interface (MPI) standard implements distributed-memory parallel computing and can be used from C, C++, and Fortran.
* An MPI program can run on a single compute node as well as on multiple compute nodes.
* To compile an MPI code, use the MPI compiler wrappers, e.g., mpicc for C code and mpif90 for Fortran code (a minimal mpi_hello_world.c is sketched after this list):
 $ module load gcc/6.3.0 openmpi/4.0.2
 $ mpicc -o mpi_hello_world mpi_hello_world.c
* To run an MPI program, launch it with "mpirun":
 $ module load gcc/6.3.0 openmpi/4.0.2
 $ bsub -n 240 mpirun ./mpi_hello_world
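For illustration, a minimal sketch of what mpi_hello_world.c could look like (the actual code is linked under Examples below; this is an assumed hello-world, not the wiki's exact file):

 #include <stdio.h>
 #include <mpi.h>
 
 int main(int argc, char **argv) {
     int rank, size;
     MPI_Init(&argc, &argv);                /* start the MPI runtime */
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* id of this process */
     MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */
     printf("Hello from rank %d of %d\n", rank, size);
     MPI_Finalize();
     return 0;
 }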
</td>
</tr>
</table>
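To see why distributed memory requires explicit communication, here is a sketch (a hypothetical bcast_update.c, not part of the wiki examples) in which rank 0 updates a value in its private memory and broadcasts it so that the other ranks see the change. It compiles with mpicc like the hello-world above:

 #include <stdio.h>
 #include <mpi.h>
 
 int main(int argc, char **argv) {
     int rank, value = 0;
     MPI_Init(&argc, &argv);
     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     if (rank == 0)
         value = 42;  /* only rank 0's private copy changes */
     /* explicit communication: rank 0 sends its value to every other rank */
     MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);
     printf("Rank %d now has value = %d\n", rank, value);
     MPI_Finalize();
     return 0;
 }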
== Examples ==
* [[openMP hello world in C]]
* [[MPI hello world in C]]
== Further reading ==
* [[Parallel computing]]
* [https://sis.id.ethz.ch/services/consultingtraining/mpi_openmp_course.html Four-day course in parallel programming with MPI/OpenMP at ETH]
== Helper ==
* [https://scicomp.ethz.ch/lsf_submission_line_advisor/ LSF Submission Line Advisor]
<table style="width: 100%;">
<tr valign=top>
<td style="width: 30%; text-align:left">
< [[Job submission | Submit a job]]
</td>
<td style="width: 35%; text-align:center">
[[Main Page | Home]]
</td>
<td style="width: 35%; text-align:right">
[[GPU job submission | Submit a GPU job]] >
</td>
</tr>
</table>