The following nodes have the indicated GPUs available. They can be requested by queue (or by node and queue) as shown in the example below. You must module load the appropriate cuda module to enable the libraries needed to talk to the GPUs; use module av cuda to see which versions are available.

Include the following SGE directive in your qsub script.

#!/bin/bash

# SGE directives
# ...

# specify the queue you want, either by queue name alone

#$ -q queue_name
#    e.g.
#$ -q gpu2

#    or by node@queue_name to target a specific node

#$ -q node@queue_name
#    e.g.
#$ -q compute-5-5@gpu1080

module load cuda/9.0  # for example

# rest of your code
# ...
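If you want to confirm which cuda versions are installed and check that your job was accepted, a minimal submit-and-check sequence looks like the following sketch (gpu_job.sh is a placeholder for your own script name):

# list the cuda modules installed on the cluster
module av cuda

# submit the job script to SGE
qsub gpu_job.sh

# check the status of your jobs in the queue
qstat -u $USER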
Table 1. GPU nodes on HPC

Node            # of GPUs  Nvidia Model                            SGE Q name
compute-1-14    4          GF110GL [Tesla M2090] (rev a1)          gpu
compute-4-17    1          GK110BGL [Tesla K40m] (rev a1)          free40i (s)
compute-4-18    4          GK110B [GeForce GTX TITAN Z] (rev a1)   ik (private)
compute-5-4     3          GP104 [GeForce GTX 1080] (rev a1)       gpu1080 (s)
compute-5-5     3          GP104 [GeForce GTX 1080] (rev a1)       gpu1080 (s)
compute-5-6     3          GP104 [GeForce GTX 1080] (rev a1)       gpu1080 (s)
compute-5-7     3          GP104 [GeForce GTX 1080] (rev a1)       gpu1080 (s)
compute-5-8     3          GP104 [GeForce GTX 1080] (rev a1)       gpu1080 (s)
compute-6-1     2          GK110GL [Tesla K20c] (rev a1)           free32i (s)
compute-6-3     1          GK110BGL [Tesla K40c] (rev a1)          its (private)
compute-7-12    2          GP104 [GeForce GTX 1080] (rev a1)       gpu2
compute-7-12    2          GK210GL [Tesla K80] (rev a1)            gpu2
(s) indicates that the node is privately owned and jobs running on it can be suspended.
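The queue names in Table 1 can also be given to qsub on the command line rather than inside the script. The example below again assumes a script named gpu_job.sh:

# request any node in the shared gpu1080 queue
qsub -q gpu1080 gpu_job.sh

# or pin the job to one particular node
qsub -q compute-5-5@gpu1080 gpu_job.sh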