Parallel job submission

== Shared memory job (OpenMP) ==
 
[[Image:shared_memory_computing.png|300px]]
 
In shared-memory parallel computing, multiple processors (or "threads") perform tasks independently but share a common global memory. If a processor modifies an array in memory, all other processors can see the update as well.

* A serial code can be parallelized by marking the code sections that should run in parallel with [https://www.openmp.org/ OpenMP directives].
* An OpenMP code can run on a single compute node and use up to the maximum number of processors in that node, e.g., 24, 36, or 128.
* To compile an OpenMP code (a minimal sketch of the hello_omp.c source is shown after this list):
 $ module load gcc/6.3.0
 $ gcc -o hello_omp -fopenmp hello_omp.c
* To run an OpenMP code, first define the number of processors (threads) in $OMP_NUM_THREADS:
 $ export OMP_NUM_THREADS=8
 $ bsub -n 8 ./hello_omp
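
The commands above assume a source file hello_omp.c. Its contents are not given on this page; a minimal sketch of what such a file could look like, in which each thread started by the OpenMP runtime prints its own ID:

 /* hello_omp.c -- illustrative OpenMP example (assumed contents) */
 #include <stdio.h>
 #include <omp.h>
 
 int main(void)
 {
     /* The runtime starts $OMP_NUM_THREADS threads; each executes this block. */
     #pragma omp parallel
     {
         printf("Hello from thread %d of %d\n",
                omp_get_thread_num(), omp_get_num_threads());
     }
     return 0;
 }

Since an OpenMP program cannot span compute nodes, it can help to ask the batch system to place all requested cores on a single host, e.g. with LSF's span resource requirement (whether your cluster requires this explicitly is an assumption):
 $ bsub -n 8 -R "span[hosts=1]" ./hello_omp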
 
== Distributed memory job (MPI) ==
 
[[Image:distributed_memory_computing.png|350px]]
 
In distributed-memory parallel computing, multiple processors (or "cores") perform tasks independently while each processor possesses its own private memory. If a processor modifies an array in its memory, it has to communicate the change to the other processors so that they see the update.

* The Message Passing Interface (MPI) library implements distributed-memory parallel computing and can be programmed in C, C++, and Fortran.
* An MPI program can run on a single compute node as well as on multiple compute nodes.
* To compile an MPI code, use the MPI compiler wrappers, e.g., mpicc for C code and mpif90 for Fortran code (a minimal sketch of the mpi_hello_world.c source is shown after this list):
 $ module load gcc/6.3.0 openmpi/4.0.2
 $ mpicc -o mpi_hello_world mpi_hello_world.c
* To run an MPI program, launch it with "mpirun":
 $ module load gcc/6.3.0 openmpi/4.0.2
 $ bsub -n 240 mpirun ./mpi_hello_world
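
The commands above assume a source file mpi_hello_world.c. Its contents are not given on this page; a minimal sketch of what such a file could look like, in which every rank reports its rank and the total number of ranks in the job:

 /* mpi_hello_world.c -- illustrative MPI example (assumed contents) */
 #include <stdio.h>
 #include <mpi.h>
 
 int main(int argc, char **argv)
 {
     int rank, size;
 
     MPI_Init(&argc, &argv);               /* start the MPI runtime     */
     MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank       */
     MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
 
     printf("Hello from rank %d of %d\n", rank, size);
 
     MPI_Finalize();                       /* shut down the MPI runtime */
     return 0;
 }

With "bsub -n 240", LSF reserves 240 cores, possibly spread over several nodes; mpirun picks up that allocation and typically starts one rank per reserved core, so this example would print 240 lines.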
 
 
 
 
 

== Examples ==

== Further readings ==

== Helper ==

