A short tutorial on using GPUs for your deep learning models with PyTorch, from checking availability to visualizing usage.
Introduction
In this report, we will walk through ways to use your GPU in PyTorch and gain more control over it.
We'll use Weights & Biases, which automatically logs all our GPU and CPU utilization metrics. This makes it easy to monitor compute resource usage as we train a plethora of models.
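As a minimal sketch, starting a run is all it takes; the project name below is a placeholder:

import wandb

# Initializing a run is enough: W&B records system metrics such as
# GPU/CPU utilization and memory in the background while the run is active.
run = wandb.init(project="pytorch-gpu-tutorial")  # placeholder project name

# ... training code goes here ...

run.finish()  # end the run and flush the logged metrics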
Check GPU Availability
The easiest way to check if you have access to GPUs is to call torch.cuda.is_available(). If it returns True, the NVIDIA driver is correctly installed and PyTorch can see at least one CUDA-capable GPU.
>>> import torch
>>> torch.cuda.is_available()
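Beyond a yes/no answer, a couple of related calls report how many GPUs PyTorch can see and what they are. A minimal sketch:

import torch

if torch.cuda.is_available():
    print(torch.cuda.device_count())      # number of GPUs visible to PyTorch
    print(torch.cuda.get_device_name(0))  # name of the first GPU, e.g. "Tesla T4"
else:
    print("No CUDA-capable GPU detected")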
Use GPU - Gotchas
- By default, tensors are created on the CPU. Even the model is initialized on the CPU. You therefore have to manually ensure that operations are carried out on the GPU.
>>> X_train = torch.FloatTensor([0., 1., 2.])
>>> X_train.is_cuda
False
- PyTorch provides a simple-to-use API to transfer a tensor created on the CPU to the GPU. Conveniently, tensors produced by operations on a tensor are created on the same device as the parent tensor.
>>> X_train = X_train.to(device)
>>> X_train.is_cuda
True
- The same logic applies to the model:
model = MyModel(args)
model.to(device)
- Thus both the data and the model need to be transferred to the GPU. Well, what's device?
- It's a common PyTorch practice to initialize a variable, usually named device, that holds the device we're training on (CPU or GPU). A combined example follows after this list.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(device)
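Putting the gotchas above together, here is a minimal end-to-end sketch; the linear layer and the tiny input tensor are placeholders for your own model and data:

import torch
import torch.nn as nn

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(3, 1)  # placeholder for MyModel(args)
model.to(device)         # nn.Module.to() moves the parameters in place

X_train = torch.FloatTensor([0., 1., 2.])
X_train = X_train.to(device)  # Tensor.to() returns a copy on the target device

y_pred = model(X_train)  # the forward pass now runs on `device`
print(y_pred.is_cuda)    # True when a GPU is available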