How To Check If PyTorch Is Using The GPU
In this tutorial, we walk you through how to check if PyTorch is using your GPU.
Created on August 14 | Last edited on March 24
In this tutorial we will look at some of the ways to check whether PyTorch is using your GPU.
So ...
How do we check if PyTorch is using the GPU?
Method One: nvidia-smi
One of the easiest ways to detect the presence of a GPU is the nvidia-smi command.
The NVIDIA System Management Interface (nvidia-smi) is a command-line utility intended to aid in the management and monitoring of NVIDIA GPU devices. You can read more about it in NVIDIA's documentation.
In Google Colab, which provides free GPU access, you can easily find the GPU device name and check what you're allowed to do with it.

Fig 1: Result of using nvidia-smi
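If you only want the headline numbers rather than the full status table, nvidia-smi also has a query mode. A quick sketch (this assumes an NVIDIA driver is installed, as it is on a Colab GPU runtime):

```shell
# full status table, as in Fig 1
nvidia-smi

# just the device name and memory figures, as CSV
nvidia-smi --query-gpu=name,memory.total,memory.used --format=csv
```

The query mode is handy for scripting, since the CSV output is easy to parse or log.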
Method Two: Manual Check
In PyTorch, the torch.cuda package adds support for CUDA tensor types, which implement the same functions as CPU tensors but utilize GPUs for computation.
Let's walk through some easy checks.
```python
# imports are always needed
import torch

# get index of currently selected device
torch.cuda.current_device()  # returns 0 in my case

# get number of GPUs available
torch.cuda.device_count()  # returns 1 in my case

# get the name of the device
torch.cuda.get_device_name(0)  # good old Tesla K80
```
The code snippet shown below is a handy way to get some information about the GPU.
```python
# setting device on GPU if available, else CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Using device:', device)
print()

# additional info when using CUDA
if device.type == 'cuda':
    print(torch.cuda.get_device_name(0))
    print('Memory Usage:')
    print('Allocated:', round(torch.cuda.memory_allocated(0) / 1024**3, 1), 'GB')
    # memory_cached was deprecated in favor of memory_reserved
    print('Reserved: ', round(torch.cuda.memory_reserved(0) / 1024**3, 1), 'GB')
```
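As a final sanity check, you can move a tensor or model to the selected device and confirm where it actually lives. A minimal sketch (the tensor and layer shapes here are arbitrary):

```python
import torch

# pick the GPU when one is available, otherwise fall back to the CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# move a tensor to the device and confirm its placement
x = torch.randn(3, 3).to(device)
print(x.device)  # e.g. cuda:0 on a GPU machine, cpu otherwise

# the same works for a model's parameters
model = torch.nn.Linear(3, 1).to(device)
print(next(model.parameters()).device)
```

Checking `x.device` on an actual tensor is more reliable than `torch.cuda.is_available()` alone, since a tensor only ends up on the GPU if you explicitly move it there.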
Monitoring Your GPU Metrics
Now that you have access to your GPU, you are likely wondering what the easiest way to monitor your GPU metrics is.
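Before reaching for a dashboard, you can poll the same torch.cuda counters yourself during training. A minimal sketch (`gpu_memory_summary` is a made-up helper name, not a PyTorch API):

```python
import torch

def gpu_memory_summary(device_index=0):
    """Return allocated/reserved GPU memory in GB, or None when no GPU is present."""
    if not torch.cuda.is_available():
        return None
    gb = 1024 ** 3
    return {
        "allocated_gb": round(torch.cuda.memory_allocated(device_index) / gb, 2),
        "reserved_gb": round(torch.cuda.memory_reserved(device_index) / gb, 2),
    }

# call this inside your training loop to watch memory grow or leak
print(gpu_memory_summary())
```

Calling a helper like this once per epoch gives you a rough memory trace; for richer metrics (utilization, temperature, power), a dedicated tool is easier.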
And here are some other posts you might find interesting.
Using GPUs With Keras: A Tutorial With Code
This tutorial covers how to use GPUs for your deep learning models with Keras, from checking GPU availability right through to logging and monitoring usage.
How to Prevent TensorFlow From Fully Allocating GPU Memory
In this report, we see how to prevent a common TensorFlow performance issue.
How To Use GPU with PyTorch
A short tutorial on using GPUs for your deep learning models with PyTorch, from checking availability to visualizing usage.
How to save and load models in PyTorch
This article is a machine learning tutorial on how to save and load your models in PyTorch using Weights & Biases for version control.