
Submitting GPU jobs

The HPC supports GPU jobs and includes a growing number of compute nodes equipped with GPU hardware.

GPU resources on the HPC are available through specific Slurm accounts (partitions):

GPU Model                     Slurm accounts
NVIDIA GeForce GTX 1080 Ti    Owner accounts only
NVIDIA RTX A4000              genacc_q, backfill, backfill2, quicktest
NVIDIA H100 Tensor Core GPU   Limited access for DSC pilot program

In addition, if your department, lab, or group has purchased GPU resources, they will be available on your owner-based Slurm account.
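
If you are unsure which nodes carry GPUs, you can ask Slurm directly. The sketch below is a minimal example using sinfo's %G (generic resources) output field against the general-access genacc_q account from the table above; GPU nodes report GRES entries such as gpu:4.

# List each node in genacc_q alongside its generic resources (GRES);
# nodes without GPUs show "(null)" in the second column
sinfo -p genacc_q -o "%N %G"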

Submitting GPU Jobs

If you wish to submit a job to node(s) that have GPUs, simply add the following line to your submit script:

#SBATCH --gres=gpu:[1-4]  # <-- Choose a value between 1 and 4 cards

Nodes contain two to four GPU cards. Specify the number of GPU cards per node you wish to use after the --gres=gpu: directive. For example, if your job requires four GPU cards, specify 4:

#SBATCH --gres=gpu:4  # This job will reserve four GPU cards on a single node.
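
On clusters where GPU scheduling is configured through GRES, Slurm typically exports CUDA_VISIBLE_DEVICES to the job with the indices of the cards it allocated, so CUDA applications see only those devices. A minimal sanity check, assuming that behavior holds here:

# Slurm commonly sets CUDA_VISIBLE_DEVICES to the indices of the
# allocated cards (e.g. "0,1" for a --gres=gpu:2 job); an empty
# value suggests no GPUs were assigned to this job
echo "Allocated GPUs: $CUDA_VISIBLE_DEVICES"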

Full Example Submit Script

The following HPC job will run on a GPU node and simply print information about the available GPU cards:

#!/bin/bash

#SBATCH --job-name="gpu_test"
#SBATCH --ntasks=1
#SBATCH --mail-type="ALL"
#SBATCH -t 1:00

# Here is the magic line to ensure we're running on a node with GPUs
#SBATCH --gres=gpu:1

# If your owner-based Slurm account has access to GPU nodes, you can use it here.
# General-access users can run GPU jobs only in the Slurm accounts indicated above.
#SBATCH -A genacc_q

# Not strictly necessary for this example, but most
# folks will want to load the CUDA module for GPU jobs
module load cuda

# Print out GPU information
/usr/bin/nvidia-smi -L
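
To try it yourself, save the script under any name (gpu_test.sh below is just an example) and submit it with sbatch:

# Submit the script; sbatch prints the assigned job ID
sbatch gpu_test.sh

# Watch the job in the queue until it finishes
squeue -u $USER

# By default, Slurm writes job output to slurm-<jobid>.out
cat slurm-<jobid>.out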

Your job output should look something like this:

GPU 0: NVIDIA Graphics Device (UUID: GPU-96cbe295-a053-3347-090d-b0adbb013646)
GPU 1: NVIDIA Graphics Device (UUID: GPU-62f15a0a-9c64-6bc4-4a88-f0cdea9a09c1)

For more information and examples, refer to our CUDA software documentation.


Last update: May 30, 2023