Distributed computing in R with Rmpi

From ScientificComputing
Latest revision as of 09:19, 15 July 2022


Load modules and install Rmpi

Change to the new software stack and load required modules. Here we need MPI and R libraries.

$ env2lmod
$ module load gcc/8.2.0 openmpi/4.1.4 r/4.1.3
$ R
> install.packages("Rmpi")

Request an interactive session

Rmpi assigns one processor to be the master and the other processors to be workers. Here we would like to use 5 worker processors on 2 nodes for the computation, so we request 6 processors in total; the span[ptile=3] option places 3 ranks on each node.

$ bsub -n 6 -R "span[ptile=3]" -Is bash
Generic job.
Job <225427996> is submitted to queue <normal.4h>.
<<Waiting for dispatch ...>>
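For longer runs, the same resource request can be packaged as an LSF job script instead of an interactive session. A minimal sketch, assuming the illustrative file name job_rmpi.bsub and a 4-hour wall time to match the normal.4h queue shown above:

```shell
# Write a hypothetical LSF job script mirroring the interactive request above.
cat > job_rmpi.bsub <<'EOF'
#!/bin/bash
#BSUB -n 6                  # 6 MPI ranks: 1 master + 5 workers
#BSUB -R "span[ptile=3]"    # 3 ranks per node -> 2 nodes
#BSUB -W 4:00               # wall time (matches the normal.4h queue)
mpirun -np 1 Rscript test_rmpi.R
EOF
cat job_rmpi.bsub
```

Submit it with bsub < job_rmpi.bsub.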

Use Rmpi

Create an R script called test_rmpi.R

# Load Rmpi which calls mpi.initialize()
library(Rmpi)

# Spawn R slaves. nslaves = requested number of processors - 1
usize <- mpi.universe.size()
ns <- usize - 1
mpi.spawn.Rslaves(nslaves=ns)

# Set up a variable array
var <- c(11.0, 22.0, 33.0, 44.0, 55.0)

# Root sends state variables and parameters to other ranks
mpi.bcast.data2slave(var, comm = 1, buffunit = 100)
# Get the rank number of that processor
mpi.bcast.cmd(id <- mpi.comm.rank())
# Check if each rank can use its own value
mpi.remote.exec(paste("The variable on rank ",id," is ", var[id]))

# Root orders other ranks to calculate
mpi.bcast.cmd(output <- var[id]*2)
# Root orders other ranks to gather the output
mpi.bcast.cmd(mpi.gather(output, 2, double(1)))

# Root gathers the output from other ranks
mpi.gather(double(1), 2, double(usize))

# Close down and quit
mpi.close.Rslaves(dellog = FALSE)
mpi.quit()

Run the script with mpirun. Note that -np 1 starts only the master process; the 5 workers are spawned from within the script by mpi.spawn.Rslaves().

$ mpirun -np 1 Rscript test_rmpi.R
        5 slaves are spawned successfully. 0 failed.
master (rank 0, comm 1) of size 6 is running on: eu-g1-016-2
slave1 (rank 1, comm 1) of size 6 is running on: eu-g1-016-2
slave2 (rank 2, comm 1) of size 6 is running on: eu-g1-016-2
slave3 (rank 3, comm 1) of size 6 is running on: eu-g1-019-4
slave4 (rank 4, comm 1) of size 6 is running on: eu-g1-019-4
slave5 (rank 5, comm 1) of size 6 is running on: eu-g1-019-4
$slave1
[1] "The variable on rank  1  is  11"

$slave2
[1] "The variable on rank  2  is  22"

$slave3
[1] "The variable on rank  3  is  33"

$slave4
[1] "The variable on rank  4  is  44"

$slave5
[1] "The variable on rank  5  is  55"

[1]   0  22  44  66  88 110
[1] 1
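The gathered vector above can be checked by hand: the master contributes its local double(1), i.e. 0, and each worker i returns var[i]*2. A quick shell sketch of that arithmetic:

```shell
# Expected gather vector: the master's 0 followed by each worker's var[i]*2.
vals="11 22 33 44 55"
out="0"
for v in $vals; do
  out="$out $((v * 2))"
done
echo "$out"   # 0 22 44 66 88 110
```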


Further reading

https://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf
