GPU job submission


To use GPUs in a job, you need to request the ngpus_excl_p resource. It refers to the number of GPUs per node, unlike other resources, which are requested per core.

For example, to run a serial job with one GPU,

bsub -R "rusage[ngpus_excl_p=1]" ./my_cuda_program
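
The same resources can also be requested with #BSUB directives in a batch script instead of on the command line. A minimal sketch, where the script name gpu_job.sh is a placeholder:

#!/bin/bash
#BSUB -n 1
#BSUB -R "rusage[ngpus_excl_p=1]"
./my_cuda_program

The script is then submitted with

bsub < gpu_job.sh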

or on a full node with all 8 GeForce GTX 1080 Ti GPUs and up to 90 GB of RAM (the mem=4500 request is per core, so 20 cores × 4500 MB = 90 GB),

bsub -n 20 -R "rusage[mem=4500,ngpus_excl_p=8]" -R "select[gpu_model0==GeForceGTX1080Ti]" ./my_cuda_program

or on two full nodes, where span[ptile=20] places exactly 20 cores on each node:

bsub -n 40 -R "rusage[mem=4500,ngpus_excl_p=8]" -R "select[gpu_model0==GeForceGTX1080Ti]" -R "span[ptile=20]" ./my_cuda_program
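
Note that a plain executable started this way runs only on the first of the two nodes. To actually use both nodes, the program typically has to be launched through MPI; a hedged sketch, assuming my_cuda_program is built with MPI support:

bsub -n 40 -R "rusage[mem=4500,ngpus_excl_p=8]" -R "select[gpu_model0==GeForceGTX1080Ti]" -R "span[ptile=20]" mpirun ./my_cuda_program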

Although your job can see all GPUs in a node, LSF sets the CUDA_VISIBLE_DEVICES environment variable to the GPUs allocated to the job; CUDA programs honor this variable and use only those GPUs.
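
To check which GPUs were assigned, you can print the variable from inside a job; a small sketch, assuming nvidia-smi is available on the GPU nodes:

bsub -R "rusage[ngpus_excl_p=1]" 'echo $CUDA_VISIBLE_DEVICES; nvidia-smi'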

Further reading

