Getting started with GPUs
==Introduction==

There are GPU nodes in the Euler cluster. The GPU nodes are reserved exclusively for the shareholder groups that invested in them. Guest users and shareholders that purchased CPU nodes but no GPU nodes cannot use the GPU nodes.

==CUDA and cuDNN==

The cuDNN versions provided on the cluster are compiled for a particular CUDA version. We will soon add a table with the compatible versions here.
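As a minimal sketch, the modules can be inspected and loaded as shown below; the version numbers are placeholders, so pick a matching CUDA/cuDNN pair from the module avail output:

 module avail cudnn                        # list the installed cuDNN modules (each is built against one CUDA version)
 module load cuda/11.3.1 cudnn/8.2.1.32    # hypothetical versions; load a matching CUDA/cuDNN pair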

==How to submit a GPU job==

All GPUs are configured in Exclusive Process mode. To run a multi-node job, you will need to request span[ptile=XX], with XX being the number of CPU cores per GPU node, which depends on the node type (the node types are listed in the table below).

The LSF batch system has partially integrated support for GPUs. To use the GPUs for a job, you need to request the ngpus_excl_p resource. It refers to the number of GPUs per node; this is unlike other resources, which are requested per core.

For example, to run a serial job with one GPU,

 bsub -R "rusage[ngpus_excl_p=1]" ./my_cuda_program

or on a full node with all 8 GeForce GTX 1080 Ti GPUs and up to 90 GB of RAM,

 bsub -n 20 -R "rusage[mem=4500,ngpus_excl_p=8]" -R "select[gpu_model0==GeForceGTX1080Ti]" ./my_cuda_program

or on two full nodes:

 bsub -n 40 -R "rusage[mem=4500,ngpus_excl_p=8]" -R "select[gpu_model0==GeForceGTX1080Ti]" -R "span[ptile=20]" ./my_cuda_program

While your job will see all GPUs on a node, LSF sets the CUDA_VISIBLE_DEVICES environment variable to the GPUs assigned to your job; this variable is honored by CUDA programs.
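If you submit GPU jobs regularly, the resource requests can also be placed in a job script. The script below is only a sketch: the job name, output file and run time are placeholder values.

 #!/bin/bash
 #BSUB -J gpu_job                                 # job name (placeholder)
 #BSUB -o gpu_job.%J.out                          # output file, %J expands to the job ID
 #BSUB -W 4:00                                    # run time limit (placeholder)
 #BSUB -n 20                                      # number of CPU cores
 #BSUB -R "rusage[mem=4500,ngpus_excl_p=8]"       # memory per core and GPUs per node
 #BSUB -R "select[gpu_model0==GeForceGTX1080Ti]"  # optional: restrict the GPU model
 echo "Visible GPUs: $CUDA_VISIBLE_DEVICES"       # GPUs assigned to this job by LSF
 ./my_cuda_program

The script (here hypothetically named gpu_job.sh) is then submitted with

 bsub < gpu_job.sh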

==Software with GPU support==

On Euler, packages with GPU support are only available in the [[Euler_applications_and_libraries|new software stack]]. None of the packages in the old software stack on Euler has support for GPUs.
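As a minimal sketch, assuming the env2lmod helper is used to switch the current shell to the new software stack and using hypothetical module versions (check module avail for the actual ones), loading a GPU-enabled package could look like this:

 env2lmod                                 # switch the current shell to the new software stack
 module load gcc/8.2.0 python_gpu/3.9.9   # hypothetical versions of a GPU-enabled package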

==Available GPU node types==

===Euler===

{| class="wikitable"
|-
! GPU Model !! Specifier (GPU driver <= 450.80.02) !! Specifier (GPU driver > 450.80.02) !! GPU memory per GPU !! CPU cores per node !! CPU memory per node
|-
| NVIDIA GeForce GTX 1080 || <tt>GeForceGTX1080</tt> || <tt>NVIDIAGeForceGTX1080</tt> || 8&nbsp;GiB || 20 || 256&nbsp;GiB
|-
| NVIDIA GeForce GTX 1080 Ti || <tt>GeForceGTX1080Ti</tt> || <tt>NVIDIAGeForceGTX1080Ti</tt> || 11&nbsp;GiB || 20 || 256&nbsp;GiB
|-
| NVIDIA GeForce RTX 2080 Ti || <tt>GeForceRTX2080Ti</tt> || <tt>NVIDIAGeForceRTX2080Ti</tt> || 11&nbsp;GiB || 36 || 384&nbsp;GiB
|-
| NVIDIA GeForce RTX 2080 Ti || <tt>GeForceRTX2080Ti</tt> || <tt>NVIDIAGeForceRTX2080Ti</tt> || 11&nbsp;GiB || 128 || 512&nbsp;GiB
|-
| NVIDIA GeForce RTX 3090 || || <tt>NVIDIAGeForceRTX3090</tt> || 24&nbsp;GiB || 128 || 512&nbsp;GiB
|-
| NVIDIA TITAN RTX || <tt>TITANRTX</tt> || <tt>NVIDIATITANRTX</tt> || 24&nbsp;GiB || 128 || 512&nbsp;GiB
|-
| NVIDIA Quadro RTX 6000 || <tt>QuadroRTX6000</tt> || <tt>QuadroRTX6000</tt> || 24&nbsp;GiB || 128 || 512&nbsp;GiB
|-
| NVIDIA Tesla V100-SXM2 32 GB || <tt>TeslaV100_SXM2_32GB</tt> || <tt>TeslaV100_SXM2_32GB</tt> || 32&nbsp;GiB || 48 || 768&nbsp;GiB
|-
| NVIDIA Tesla V100-SXM2 32 GB || <tt>TeslaV100_SXM2_32GB</tt> || <tt>TeslaV100_SXM2_32GB</tt> || 32&nbsp;GiB || 40 || 512&nbsp;GiB
|-
| NVIDIA Tesla A100 || <tt>A100_PCIE_40GB</tt> || <tt>NVIDIAA100_PCIE_40GB</tt> || 40&nbsp;GiB || 48 || 768&nbsp;GiB
|}

Please note that the GPU driver update is a rolling update. For GPU node types where all nodes already have the updated driver version, the old specifier is crossed out in the table above. Don't use crossed-out specifiers, as your job will pend forever because LSF cannot find nodes with GPUs that have those identifiers.
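If you want to check which driver the node running your job actually has, you can query it with nvidia-smi from inside a GPU job; this is only an illustrative one-liner:

 bsub -R "rusage[ngpus_excl_p=1]" "nvidia-smi --query-gpu=name,driver_version --format=csv"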

==How to select GPU memory==

If you know that you will need more memory on a GPU than some models provide, e.g., more than 8 GB, then you can request that your job runs only on GPUs with enough memory. Use the gpu_mtotal0 host selection to do this. For example, if you need 10 GB (= 10240 MB) per GPU:

 [sfux@lo-login-01 ~]$ '''bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_mtotal0>=10240]" ./my_cuda_program'''

This ensures your job will not run on GPUs with less than 10 GB of GPU memory.
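The same mechanism works for larger thresholds. As an illustration based on the table above, requesting at least 20 GB (= 20480 MB) of GPU memory restricts the job to the RTX 3090, TITAN RTX, Quadro RTX 6000, V100 and A100 nodes:

 bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_mtotal0>=20480]" ./my_cuda_program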

==How to select a GPU model==

In some cases it is desirable or necessary to select the GPU model on which your job runs, for example if you know that your code runs much faster on a newer model. However, you should consider that by narrowing down the list of allowable GPUs, your job may have to wait longer in the queue.

To select a certain GPU model, add the -R "select[gpu_model0==GPU_MODEL]" resource requirement to bsub,

 [sfux@lo-login-01 ~]$ '''bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_model0==GeForceGTX1080]" ./my_cuda_program'''
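Note that the specifiers changed with the GPU driver update (see the table above). As an illustration, the same kind of request with one of the new specifiers, here the GeForce RTX 3090, looks like this:

 [sfux@lo-login-01 ~]$ '''bsub -R "rusage[ngpus_excl_p=1]" -R "select[gpu_model0==NVIDIAGeForceRTX3090]" ./my_cuda_program'''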