Getting started with GPUs

Introduction

Currently we provide GPUs only in the Leonhard cluster, where access is restricted to Shareholders. The instructions on this wiki page therefore apply only to the Leonhard cluster.

How to submit a GPU job

All GPUs in Leonhard are configured in Exclusive Process mode. The GPU nodes have 20 cores, 8 GPUs, and 256 GB of RAM (of which only about 210 GB is usable). To run a multi-node job, you need to request span[ptile=20].

The LSF batch system has partially integrated support for GPUs. To use the GPUs for a job, you need to request the ngpus_excl_p resource. It refers to the number of GPUs per node, unlike other resources, which are requested per core.

For example, to run a serial job with one GPU,

bsub -R "rusage[ngpus_excl_p=1]" ./my_cuda_program

or on a full node with all eight GPUs and up to 90 GB of RAM,

bsub -n 20 -R "rusage[mem=4500,ngpus_excl_p=8]" ./my_cuda_program

or on two full nodes:

bsub -n 40 -R "rusage[mem=4500,ngpus_excl_p=8] span[ptile=20]" ./my_cuda_program

While your job will see all GPUs of a node, LSF sets the CUDA_VISIBLE_DEVICES environment variable, which is honored by CUDA programs, so that your job only uses the GPUs assigned to it.
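If you want to verify which GPUs were assigned to your job, a minimal sketch is to print that variable from inside a job; this assumes nvidia-smi is installed on the GPU nodes, which is typical for CUDA installations:

bsub -R "rusage[ngpus_excl_p=1]" 'echo $CUDA_VISIBLE_DEVICES; nvidia-smi'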

Python and GPUs

Because certain Python packages need different installations for their CPU and GPU versions, we provide separate Python installations for CPU and GPU usage. For instance, running the GPU version of TensorFlow on a CPU node will immediately crash, because TensorFlow checks on startup whether the compute node has a GPU driver.

CPU version: module load python_cpu/3.6.1
GPU version: module load python_gpu/3.6.1
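
A common pattern is to load the matching module inside a job script and submit that script to the batch system. The following is a minimal sketch, where gpu_job.sh and my_script.py are hypothetical names:

#!/bin/bash
# gpu_job.sh -- load the GPU Python stack, then run the actual script
module load python_gpu/3.6.1
python my_script.py

Submit it to a GPU node with:

bsub -R "rusage[ngpus_excl_p=1]" < gpu_job.sh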

TensorFlow example

As an example of running a TensorFlow job on a GPU node, we print the TensorFlow version, the string Hello, TensorFlow!, and the result of a simple matrix multiplication:

[leonhard@lo-login-01 ~]$ cd testrun/python
[leonhard@lo-login-01 python]$ module load python_gpu/2.7.13
[leonhard@lo-login-01 python]$ cat tftest1.py
#!/usr/bin/env python
from __future__ import print_function
import tensorflow as tf

# Print the TensorFlow version
vers = tf.__version__
print(vers)

# Define a constant string and a simple (1x2) x (2x1) matrix product
hello = tf.constant('Hello, TensorFlow!')
matrix1 = tf.constant([[3., 3.]])
matrix2 = tf.constant([[2.],[2.]])
product = tf.matmul(matrix1, matrix2)

# Evaluate the operations in a session
sess = tf.Session()
print(sess.run(hello))
print(sess.run(product))
sess.close()
[leonhard@lo-login-01 python]$ bsub -n 1 -W 4:00 -R "rusage[mem=2048, ngpus_excl_p=1]" python tftest1.py
Generic job.
Job <10620> is submitted to queue <gpu.4h>.
[leonhard@lo-login-01 python]$ bjobs
JOBID      USER      STAT  QUEUE      FROM_HOST   EXEC_HOST   JOB_NAME   SUBMIT_TIME
10620      leonhard  PEND  gpu.4h     lo-login-01             *tftest.py Sep 28 08:02
[leonhard@lo-login-01 python]$ bjobs
JOBID      USER      STAT  QUEUE      FROM_HOST   EXEC_HOST   JOB_NAME   SUBMIT_TIME
10620      leonhard  RUN   gpu.4h     lo-login-01 lo-gtx-001  *ftest1.py Sep 28 08:03
[leonhard@lo-login-01 python]$ bjobs
No unfinished job found
[leonhard@lo-login-01 python]$ grep -A3 "Creating TensorFlow device" lsf.o10620
2017-09-28 08:08:43.235886: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1080, pci bus id: 0000:04:00.0)
1.3.0
Hello, TensorFlow!
[[ 12.]]
[leonhard@lo-login-01 python]$

Please note that your job will crash if you run the GPU version of TensorFlow on a CPU node, because TensorFlow checks on startup whether the compute node has a GPU driver.

How to select GPU memory

If you know that you will need more memory on a GPU than some models provide, i.e., more than 8 GB, then you can request that your job run only on GPUs with enough memory. Use the gpu_mtotal0 host selection criterion to do this. For example, if you need 10 GB (= 10240 MB) per GPU:

[leonhard@lo-login-01 ~]$ bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_mtotal0>=10240]" ./my_cuda_program

This ensures that your job will not run on GPUs with less than 10 GB of GPU memory. The total memory capacities of the different GPU models are:

GPU Model                       GPU Memory
NVIDIA GeForce GTX 1080         8 GiB
NVIDIA GeForce GTX 1080 Ti      11 GiB
NVIDIA GeForce RTX 2080 Ti      11 GiB
NVIDIA Tesla V100-SXM2 32 GB    32 GiB


How to select a GPU model

In some cases it is desirable or necessary to select the GPU model on which your job runs, for example if you know your code runs much faster on a newer model. However, keep in mind that narrowing down the list of allowable GPUs may increase the time your job has to wait in the queue.

To select a certain GPU model, add the -R "select[gpu_model0==GPU_MODEL]" resource requirement to bsub,

[leonhard@lo-login-01 ~]$ bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_model0==GeForceGTX1080]" ./my_cuda_program

The GPU models you can specify are:

GPU Model                       Specifier
NVIDIA GeForce GTX 1080         GeForceGTX1080
NVIDIA GeForce GTX 1080 Ti      GeForceGTX1080Ti
NVIDIA GeForce RTX 2080 Ti      GeForceRTX2080Ti
NVIDIA Tesla V100-SXM2 32 GB    TeslaV100_SXM2_32GB
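
These selection options can be combined with the other resource requests shown above. For example, a sketch following the patterns on this page, assuming the RTX 2080 Ti nodes have the same 20-core, 8-GPU layout described earlier, that requests a full node with all eight of its GPUs:

bsub -n 20 -R "rusage[mem=4500,ngpus_excl_p=8]" -R "select[gpu_model0==GeForceRTX2080Ti]" ./my_cuda_program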