Distributed computing in R with Rmpi

Load modules and install Rmpi

Switch to the new software stack and load the required modules; here we need MPI and R.

$ env2lmod
$ module load gcc/8.2.0 openmpi/4.1.4 r/4.1.3
$ R
> install.packages("Rmpi")
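
If the installation succeeded, the package shows up in your R library. As a quick optional check (added here for illustration, not part of the original recipe), still inside the same R session:

> # should return TRUE once Rmpi is installed
> "Rmpi" %in% rownames(installed.packages())
> q(save = "no")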

Request an interactive session

Rmpi assigns one processor to be the master and the remaining processors to be workers. Here we would like to use 5 worker processors on 2 nodes for the computation, so together with the master we request 6 processors; the resource requirement span[ptile=3] places 3 processes on each node.

$ bsub -n 6 -R "span[ptile=3]" -Is bash
Generic job.
Job <225427996> is submitted to queue <normal.4h>.
<<Waiting for dispatch ...>>
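
Once the session has started, it can be useful to verify from within R how many MPI slots the allocation provides before writing the full script. This is only a sanity-check sketch added for illustration; the value reported by mpi.universe.size() depends on how the MPI runtime picks up the batch allocation:

$ R
> library(Rmpi)
> # total number of slots available to the master and the spawned workers
> mpi.universe.size()
> mpi.quit()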

Use Rmpi

Create an R script called test_rmpi.R with the following content:

# Load Rmpi which calls mpi.initialize()
library(Rmpi)

# Spawn R-slaves to the host. nslaves = requested number of processors - 1
usize <- mpi.universe.size()
ns <- usize - 1
mpi.spawn.Rslaves(nslaves=ns)

# Set up a variable array
var <- c(11.0, 22.0, 33.0, 44.0, 55.0)

# Root sends state variables and parameters to other ranks
mpi.bcast.data2slave(var, comm = 1, buffunit = 100)
# Get the rank number of that processor
mpi.bcast.cmd(id <- mpi.comm.rank())
# Check if each rank can use its own value
mpi.remote.exec(paste("The variable on rank ",id," is ", var[id]))

# Root orders other ranks to calculate
mpi.bcast.cmd(output <- var[id]*2)
# Root orders other ranks to gather the output
mpi.bcast.cmd(mpi.gather(output, 2, double(1)))

# Root gathers the output from other ranks
mpi.gather(double(1), 2, double(usize))

# Close down and quit
mpi.close.Rslaves(dellog = FALSE)
mpi.quit()

Run the script with mpirun. Only one process is started directly (-np 1); the master then spawns the worker processes itself via mpi.spawn.Rslaves().

$ mpirun -np 1 Rscript test_rmpi.R
        5 slaves are spawned successfully. 0 failed.
master (rank 0, comm 1) of size 6 is running on: eu-g1-016-2
slave1 (rank 1, comm 1) of size 6 is running on: eu-g1-016-2
slave2 (rank 2, comm 1) of size 6 is running on: eu-g1-016-2
slave3 (rank 3, comm 1) of size 6 is running on: eu-g1-019-4
slave4 (rank 4, comm 1) of size 6 is running on: eu-g1-019-4
slave5 (rank 5, comm 1) of size 6 is running on: eu-g1-019-4
$slave1
[1] "The variable on rank  1  is  11"

$slave2
[1] "The variable on rank  2  is  22"

$slave3
[1] "The variable on rank  3  is  33"

$slave4
[1] "The variable on rank  4  is  44"

$slave5
[1] "The variable on rank  5  is  55"

[1]   0  22  44  66  88 110
[1] 1
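
In the gathered vector, the leading 0 is the master's own contribution: the root passes double(1) to mpi.gather, so the workers' results occupy positions 2 to 6. For comparison, the following sketch performs the same computation but lets mpi.remote.exec() return the workers' results directly, so no explicit gather calls are needed. It is an illustration under the same setup as test_rmpi.R, not part of the original tutorial:

# Sketch: same computation as test_rmpi.R, but the results are collected
# by mpi.remote.exec() instead of explicit gather calls
library(Rmpi)

ns <- mpi.universe.size() - 1
mpi.spawn.Rslaves(nslaves=ns)

var <- c(11.0, 22.0, 33.0, 44.0, 55.0)
mpi.bcast.data2slave(var, comm = 1, buffunit = 100)
mpi.bcast.cmd(id <- mpi.comm.rank())

# Each worker evaluates the expression; the master receives one result per worker
results <- mpi.remote.exec(var[id]*2)
print(results)

mpi.close.Rslaves(dellog = FALSE)
mpi.quit()

It can be launched in the same way, i.e. with mpirun -np 1 Rscript on the allocated nodes.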


Further reading

https://cran.r-project.org/web/packages/Rmpi/Rmpi.pdf
