Visualize, Track, and Compare fastai Models With Weights & Biases
In this article, we take a look at how to visualize, compare, and iterate on fastai models with Weights & Biases.
fastai With the WandbCallback

Quick Links
Use fastai and Weights & Biases to:
- Log and compare runs and hyperparameters
- Keep track of code, models, and datasets
- Automatically log prediction samples to visualize during training
- Make custom graphs and reports with data from your runs
- Launch and scale hyperparameter search on your own compute, orchestrated by W&B
- Collaborate transparently, with traceability and reproducibility
Quickstart
First, install Weights & Biases and log in.
pip install wandb
wandb login
Next, add the callback either to your Learner or to your call to a fit method:
import wandb
from fastai.callback.wandb import *

# start logging a wandb run
wandb.init(project='my_project')

# To log only during one training phase
learn.fit(..., cbs=WandbCallback())

# To log continuously for all training phases
learn = learner(..., cbs=WandbCallback())
You can test it with your own project or try our example code.
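As a starting point, here is a minimal end-to-end sketch, assuming a recent fastai version and the Oxford-IIIT Pet dataset; the project name is a placeholder:

import wandb
from fastai.vision.all import *
from fastai.callback.wandb import *

# start a run (project name is a placeholder; use your own)
wandb.init(project='my_project')

# standard fastai pets example: cat breeds have capitalized file names
path = untar_data(URLs.PETS)/'images'

def is_cat(fname):
    return fname[0].isupper()

dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat, item_tfms=Resize(224))

# attach the callback to the Learner so every training phase is logged
learn = vision_learner(dls, resnet18, metrics=error_rate, cbs=WandbCallback())
learn.fine_tune(1)

wandb.finish()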
Arguments
WandbCallback accepts the following arguments; a combined example follows the list:
- log: "gradients" (default), "parameters", "all", or None. Losses & metrics are always logged.
- log_preds (bool): whether to log prediction samples (defaults to True).
- log_model (bool): whether to log the model (defaults to True). This also requires SaveModelCallback.
- log_dataset (bool or str):
- False (default)
- True will log the folder referenced by learn.dls.path.
- a path can be defined explicitly to reference which folder to log.
- Note: the subfolder "models" is always ignored.
- dataset_name: name of the logged dataset (defaults to the folder name).
- valid_dl: DataLoaders containing items used for prediction samples (defaults to random items from learn.dls.valid).
- n_preds: number of logged predictions (defaults to 36).
- seed: used for selecting random samples.
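As an illustration, here is a hedged sketch combining several of these arguments; the values are only examples, and dls is assumed to be the DataLoaders from the quickstart sketch above:

from fastai.vision.all import *
from fastai.callback.wandb import WandbCallback

# illustrative values only; adjust them for your own project
cb = WandbCallback(
    log='all',            # log both gradients and parameters
    log_preds=True,       # log prediction samples during training
    log_dataset=True,     # log the folder referenced by learn.dls.path
    dataset_name='pets',  # custom name for the logged dataset (placeholder)
    n_preds=16,           # number of prediction samples to log
    seed=42,              # reproducible choice of sampled items
)
learn = vision_learner(dls, resnet18, metrics=error_rate, cbs=cb)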
Single Run
The callback automatically logs metrics, hyperparameters like learning rate and momentum, histograms of gradients and parameters, and prediction samples.
GPU and CPU resources are automatically logged as well. This allows you to see any compute bottlenecks during the course of training, and identify opportunities for optimization.
Use SaveModelCallback to save your model during training. The trained model will automatically be uploaded to your project at the end of the run. On your run page, you'll be able to see a summary table like this:

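Here is a minimal sketch of this pairing, reusing the DataLoaders from the quickstart sketch; the number of epochs is arbitrary:

from fastai.vision.all import *
from fastai.callback.tracker import SaveModelCallback
from fastai.callback.wandb import WandbCallback

# SaveModelCallback keeps the best checkpoint during training;
# WandbCallback(log_model=True) uploads that checkpoint when the run ends
# (dls is assumed to be the DataLoaders from the quickstart sketch)
learn = vision_learner(dls, resnet18, metrics=error_rate,
                       cbs=[WandbCallback(log_model=True), SaveModelCallback()])
learn.fine_tune(3)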
Log Sample Predictions
By default, sample predictions are logged for you to see live as the model is training. Turn this off with log_preds=False.
Semantic Segmentation
For semantic segmentation, you can:
- select which classes to display
- optionally show the input image underneath the mask
- adjust the opacity of the mask and image

You can even plot the evolution of predictions over time.

Tabular Data
You can set which dependent and independent variables to display and quickly explore sample predictions.
Comparing Runs
The parameters you used in your functions will be automatically saved.
So if you run your notebook a few times trying different parameters (batch size, number of epochs, learning rate, GradientAccumulation callback…), then open your project page, you will see that more than 100 parameters have automatically been logged for you from all the fastai functions you used.
Press the magic wand at the top right of your runs summary table, reorganize and hide the columns you want, and you get a nice comparative summary.

You can also easily create graphs to compare your different runs.
Tracking Models and Datasets

To track a model:
- use log_model=True and SaveModelCallback
- for custom scenarios, use the function log_model(path, name=None, metadata={})
To track datasets:
- use log_dataset=True to track the folder defined by learn.dls.path
- use log_dataset="my_path" to explicitly define a folder to track
- optionally use dataset_name if you want a custom name; otherwise it is set to the folder name
- for custom scenarios, use the function log_dataset(path, name=None, metadata={}) (see the sketch after this list)
- Note: the subfolder "models" is always ignored
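Here is a small sketch of those custom-scenario helpers; the paths, names, and metadata are placeholders:

import wandb
from fastai.callback.wandb import log_dataset, log_model

wandb.init(project='my_project')  # both helpers log to the active run

# log an arbitrary folder as a dataset artifact (path, name, and metadata are placeholders)
log_dataset('data/pets', name='pets-images', metadata={'split': 'train'})

# log an arbitrary model file as a model artifact (path, name, and metadata are placeholders)
log_model('models/export.pkl', name='pets-classifier', metadata={'epochs': 3})

wandb.finish()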
Finally, you can pull data from all these runs to create awesome interactive reports like this one, where your results are fully traceable!
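If you prefer to pull run data programmatically instead of through the Reports UI, here is a hedged sketch using the public wandb.Api; the entity, project, and metric names are placeholders:

import pandas as pd
import wandb

api = wandb.Api()
runs = api.runs('my_entity/my_project')  # placeholder entity/project path

rows = []
for run in runs:
    rows.append({
        'name': run.name,
        'lr': run.config.get('lr'),                   # one of the auto-logged hyperparameters
        'error_rate': run.summary.get('error_rate'),  # metric name depends on what you logged
    })

df = pd.DataFrame(rows)
print(df.head())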
Tags: Beginner, Computer Vision, Semantic Segmentation, fastai, Tutorial, W&B Meta, Artifacts, Panels, Plots, Slider