Implementing Dropout in PyTorch: With Example

An example covering how to regularize your PyTorch model with Dropout, complete with code and interactive visualizations. Made by Lavanya Shukla using W&B

An Introduction to Dropout in PyTorch

In this report, we'll see an example of adding dropout to a PyTorch model and observe the effect dropout has on the model's performance by tracking our models in Weights & Biases.

What is Dropout?

Dropout is a regularization technique where you randomly remove (or "drop out") units in a neural net during training, simulating the training of a large number of thinned architectures simultaneously. At each training step, every unit is zeroed out independently with probability p, and at inference time dropout is disabled entirely. Importantly, dropout can drastically reduce the chance of overfitting during training.
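To make that concrete, here is a minimal sketch (the tensor values and seed are arbitrary) showing how PyTorch implements inverted dropout: surviving activations are scaled up by 1/(1 - p) in training mode, so no rescaling is needed at evaluation time.

import torch
import torch.nn as nn

torch.manual_seed(0)  # arbitrary seed, just for a reproducible illustration

drop = nn.Dropout(p=0.25)
x = torch.ones(8)

drop.train()    # training mode: dropout is active
print(drop(x))  # roughly 25% of entries are zeroed; survivors become 1 / (1 - 0.25)

drop.eval()     # evaluation mode: dropout is a no-op
print(drop(x))  # identical to x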

Run an example of dropout in PyTorch in this Colab →

An Example of Adding Dropout to a PyTorch Model

1. Add Dropout to a PyTorch Model

Adding dropout to your PyTorch models is very straightforward with the torch.nn.Dropout class, which takes in the dropout rate – the probability of a neuron being deactivated – as a parameter.
self.dropout = nn.Dropout(0.25)
We can apply dropout after any non-output layer. Note that dropout is only active in training mode: model.train() enables it and model.eval() turns it into a no-op, so be sure to switch modes before validating or testing.
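As a minimal placement sketch (TinyNet and its layer sizes are made up for illustration; they are not from the Colab), a single nn.Dropout module can be reused after each hidden layer:

import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(784, 256)
        self.fc2 = nn.Linear(256, 10)
        self.dropout = nn.Dropout(0.25)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        x = self.dropout(x)  # after a hidden layer, never after the output
        return self.fc2(x)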

2. Observe the Effect of Dropout on Model Performance

To observe the effect of dropout, we'll train two image classification models: first an unregularized network, then a network regularized through dropout. Both models are trained on the CIFAR-10 dataset for 15 epochs each.
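If you want to reproduce the setup, a typical CIFAR-10 loading sketch with torchvision looks like this (the batch size and normalization constants are common defaults, assumed here rather than taken from the Colab):

import torch
import torchvision
import torchvision.transforms as transforms

# Map images to tensors and normalize each channel to roughly [-1, 1]
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
])

train_set = torchvision.datasets.CIFAR10(root='./data', train=True,
                                         download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=32,
                                           shuffle=True, num_workers=2)

test_set = torchvision.datasets.CIFAR10(root='./data', train=False,
                                        download=True, transform=transform)
test_loader = torch.utils.data.DataLoader(test_set, batch_size=32,
                                          shuffle=False, num_workers=2)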

Complete example of adding dropout to a PyTorch model (the two helper methods that infer the flattened feature size are sketched in here so the class runs standalone):

import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self, input_shape=(3, 32, 32)):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 32, 3)
        self.conv2 = nn.Conv2d(32, 64, 3)
        self.conv3 = nn.Conv2d(64, 128, 3)
        self.pool = nn.MaxPool2d(2, 2)
        n_size = self._get_conv_output(input_shape)
        self.fc1 = nn.Linear(n_size, 512)
        self.fc2 = nn.Linear(512, 10)
        # Define proportion of neurons to drop out
        self.dropout = nn.Dropout(0.25)

    def _forward_features(self, x):
        # Convolutional feature extractor: conv -> relu -> pool, three times
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = self.pool(F.relu(self.conv3(x)))
        return x

    def _get_conv_output(self, shape):
        # Pass a dummy batch through the conv stack to infer the flattened size
        with torch.no_grad():
            x = torch.zeros(1, *shape)
            x = self._forward_features(x)
        return x.view(1, -1).size(1)

    def forward(self, x):
        x = self._forward_features(x)
        x = x.view(x.size(0), -1)
        x = self.dropout(x)
        x = F.relu(self.fc1(x))
        # Apply dropout again before the output layer
        x = self.dropout(x)
        x = self.fc2(x)
        return x
By using wandb.log() in your training function, you can automatically track your model's performance. See the W&B docs for full details.
def train(model, device, train_loader, optimizer, criterion, epoch, steps_per_epoch=20):
    # Log gradients and model parameters
    wandb.watch(model)

    # Loop over the data iterator, feed the inputs to the network, and adjust the weights
    for batch_idx, (data, target) in enumerate(train_loader, start=0):
        # ...
        acc = round((train_correct / train_total) * 100, 2)

        # Log metrics to visualize performance
        wandb.log({'Train Loss': train_loss / train_total, 'Train Accuracy': acc})
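To compare the two networks in the same W&B project, you might drive training like this (the project name, run name, optimizer, and learning rate are placeholder assumptions; train() is the function sketched above and train_loader comes from the loading snippet):

import torch
import torch.nn as nn
import torch.optim as optim
import wandb

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = Net().to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One W&B run per model; a descriptive name makes the two training
# curves easy to tell apart in the dashboard.
wandb.init(project='pytorch-dropout-example', name='dropout-0.25')

for epoch in range(15):  # the tutorial trains each model for 15 epochs
    train(model, device, train_loader, optimizer, criterion, epoch)

wandb.finish()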

Impact of Using Dropout in PyTorch

Comparing the two runs in W&B, you should expect the classic signature of regularization: the unregularized network fits the training set more closely while its validation accuracy lags behind, whereas the dropout network keeps its training and validation curves noticeably closer together. And that concludes this short tutorial on using dropout in your PyTorch models.

Try our dropout Colab yourself →

Weights & Biases

Weights & Biases helps you keep track of your machine learning experiments. Use our tool to log hyperparameters and output metrics from your runs, then visualize and compare results and quickly share findings with your colleagues.
Get started in 5 minutes.