Fastai with the WandbCallback

Weights & Biases experiment tracking is integrated into fastai with the WandbCallback.


Quickstart

First, install wandb and log in.

pip install wandb
wandb login

Next, pass the callback when you create your learner or when you call a fit method:

import wandb
from fastai.callback.wandb import *

# start logging a wandb run
wandb.init(project='my_project')

# To log only during one training phase
learn.fit(..., cbs=WandbCallback())

# To log continuously for all training phases
learn = Learner(..., cbs=WandbCallback())

You can test it with your project or try our code:

See colab notebook →

Arguments

WandbCallback accepts the following arguments:

Single Run

The callback automatically logs metrics, hyperparameters like learning rate and momentum, histograms of gradients and parameters, and prediction samples.

GPU and CPU resources are automatically logged as well, so you can spot compute bottlenecks during training and identify opportunities for optimization.

Use SaveModelCallback to capture the model topology. The trained model is automatically uploaded to your project at the end of the run, and a summary table for it appears on your run page.

Log sample predictions

By default, sample predictions are logged for you to see live as the model is training. Turn this off with log_preds=False.

Semantic Segmentation

The logged prediction graphs can be customized.

You can even plot the evolution of predictions over time.

Tabular Data

You can set which dependent and independent variables to display and quickly explore sample predictions.

Comparing Runs

The parameters you used in your functions will be automatically saved.

So if you run your notebook a few times with different parameters (batch size, number of epochs, learning rate, GradientAccumulation callback…) and then open your project page, you will see that more than 100 parameters from the fastai functions you used have been logged automatically.

Press the magic wand at the top right of your runs summary table, then reorganize and hide columns as you like to get a tidy comparative summary.

You can also easily create graphs to compare your different runs.

Tracking Models and Datasets

WandbCallback uses artifacts to keep track of models and datasets.

To track a model:

To track datasets:

Finally, you can pull data from all these runs to create interactive reports in which your results are fully traceable.

Refer to the W&B documentation for more details.