Parallel job submission




Shared memory job (OpenMP)


[Image: Shared memory computing diagram]

In shared-memory parallel computing, multiple processors (or "threads") perform tasks independently but share a common global memory. If a processor modifies an array in that memory, all other processors see the update as well.

  • A serial code can be parallelized by marking the code sections that should run in parallel with OpenMP directives (a minimal example program is sketched after this list).
  • An OpenMP code runs on a single compute node and can use up to the maximum number of processors available on that node, e.g., 24, 36, or 128 processors.
  • To compile an OpenMP code:
$ module load gcc/6.3.0
$ gcc -o hello_omp -fopenmp hello_omp.c
  • To run an OpenMP code, first set the number of processors (threads) in the $OMP_NUM_THREADS environment variable:
$ export OMP_NUM_THREADS=8
$ bsub -n 8 ./hello_omp
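
The contents of hello_omp.c are not shown on this page; a minimal OpenMP program of that name might look like the following sketch.

/* hello_omp.c -- minimal OpenMP example (assumed contents, for illustration) */
#include <stdio.h>
#include <omp.h>

int main(void)
{
    /* Every thread in the team executes this parallel region. */
    #pragma omp parallel
    {
        printf("Hello from thread %d of %d\n",
               omp_get_thread_num(), omp_get_num_threads());
    }
    return 0;
}

With OMP_NUM_THREADS=8 as above, the program prints one line per thread, in no guaranteed order.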

Distributed memory job (MPI)


[Image: Distributed memory computing diagram]

In distributed-memory parallel computing, multiple processors (or "cores") perform tasks independently, and each processor has its own private memory. If a processor modifies an array in its memory, it must communicate the change to the other processors so that they see the update.

  • The Message Passing Interface (MPI) library implements distributed-memory parallel computing and can be programmed in C, C++, and Fortran
  • An MPI program can run on a single compute node as well as on multiple compute nodes
  • An MPI program must be launched using "mpirun" (a minimal example program is sketched after this list):
$ module load gcc/6.3.0 openmpi/4.0.2
$ bsub -n 240 mpirun ./program
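
As with the OpenMP case, here is a minimal MPI program; the file name hello_mpi.c and its contents are illustrative assumptions, not taken from this page.

/* hello_mpi.c -- minimal MPI example (assumed contents, for illustration) */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI runtime      */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's rank (ID)   */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes  */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down the MPI runtime  */
    return 0;
}

It can be compiled with mpicc, the compiler wrapper shipped with Open MPI, and launched through the batch system:

$ mpicc -o hello_mpi hello_mpi.c
$ bsub -n 240 mpirun ./hello_mpi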

Examples

Further readings


