# Fastai with the WandbCallback
Weights & Biases experiment tracking is integrated into fastai with the `WandbCallback`.
Use Fastai + W&B to log and compare experiments, track hyperparameters, inspect sample predictions, and version models and datasets as artifacts.
First, install wandb and log in:

```shell
pip install wandb
wandb login
```
Next, add the callback either when you create your `Learner` or in an individual call to a `fit` method:
```python
import wandb
from fastai.callback.wandb import *

# start logging a wandb run
wandb.init(project='my_project')

# To log only during one training phase
learn.fit(..., cbs=WandbCallback())

# To log continuously for all training phases
learn = Learner(..., cbs=WandbCallback())
```
You can test it with your own project or try our example code.
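As a concrete end-to-end sketch, here is one way the pieces might fit together, assuming the standard fastai Pets sample dataset; the project name, dataset, and model choices are illustrative, not part of the official example:

```python
import wandb
from fastai.vision.all import *
from fastai.callback.wandb import WandbCallback

# Start a run (project name is illustrative)
wandb.init(project='my_project')

# Oxford-IIIT Pets: a standard fastai sample dataset (illustrative choice)
path = untar_data(URLs.PETS)
dls = ImageDataLoaders.from_name_re(
    path, get_image_files(path/'images'),
    pat=r'(.+)_\d+.jpg', item_tfms=Resize(224))

# Attach the callback at Learner creation to log all training phases
learn = vision_learner(dls, resnet18, metrics=error_rate,
                       cbs=WandbCallback())
learn.fine_tune(1)

wandb.finish()
```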
`WandbCallback` accepts the following arguments:

- `log`: `"gradients"` (default), `"parameters"`, `"all"`, or `None`. Losses and metrics are always logged.
- `log_preds` (bool): whether to log prediction samples (default `True`).
- `log_model` (bool): whether to log the model (default `True`). This also requires `SaveModelCallback`.
- `log_dataset` (bool or str): `False` (default); `True` logs the folder referenced by `learn.dls.path`; a string explicitly defines the folder to track.
- `dataset_name`: name of the logged dataset (defaults to the folder name).
- `valid_dl`: `DataLoaders` containing the items used for prediction samples (defaults to random items from `learn.dls.valid`).
- `n_preds`: number of logged predictions (default `36`).
- `seed`: used for defining the random samples.

The callback automatically logs metrics, hyperparameters like learning rate and momentum, histograms of gradients and parameters, and prediction samples.
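As a minimal sketch of how these arguments combine (the specific values below are illustrative, and `learn` is assumed to be an existing `Learner` with `wandb.init` already called):

```python
from fastai.callback.wandb import WandbCallback

cb = WandbCallback(
    log='all',         # histograms of both gradients and parameters
    log_preds=True,    # log prediction samples (the default)
    log_dataset=True,  # log the folder referenced by learn.dls.path
    n_preds=9,         # log 9 predictions instead of the default 36
    seed=42,           # fix which random validation items are sampled
)
learn.fit(1, cbs=cb)
```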
GPU and CPU resources are logged automatically as well, letting you spot compute bottlenecks during training and identify opportunities for optimization.
Use `SaveModelCallback` to capture the model topology. The trained model is automatically uploaded to your project at the end of the run, and a summary table appears on your run page.
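Pairing the two callbacks might look like this (a minimal sketch; `dls` and `model` are assumed to be defined already):

```python
from fastai.learner import Learner
from fastai.callback.wandb import WandbCallback
from fastai.callback.tracker import SaveModelCallback

# SaveModelCallback saves the best model during training;
# WandbCallback(log_model=True) uploads it at the end of the run.
learn = Learner(
    dls, model,
    cbs=[WandbCallback(log_model=True),
         SaveModelCallback(monitor='valid_loss')],
)
```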
By default, sample predictions are logged for you to see live as the model is training. Turn this off with `log_preds=False`.
Graphs can be customized: you can set which dependent and independent variables to display, quickly explore sample predictions, and even plot the evolution of predictions over time.
The parameters you use in your functions are automatically saved. So if you run your notebook a few times trying different parameters (batch size, number of epochs, learning rate, the `GradientAccumulation` callback…), then open your project page, you will see that more than 100 parameters have been logged automatically from all the fastai functions you used.
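For instance, each of the following fit calls would have its hyperparameters captured automatically (a hedged sketch; the epoch counts and learning rates are illustrative):

```python
from fastai.callback.training import GradientAccumulation

# Epochs and learning rate are recorded in the run's config
learn.fit_one_cycle(4, lr_max=1e-3)

# The GradientAccumulation settings are recorded as well
learn.fit_one_cycle(8, lr_max=3e-4, cbs=GradientAccumulation(n_acc=8))
```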
Press the magic wand at the top right of your runs summary table, reorganize and hide the columns you want, and you get a nice comparative summary.
You can also easily create graphs to compare your different runs.
`WandbCallback` uses artifacts to keep track of models and datasets.

To track a model:

- set `log_model=True` and use `SaveModelCallback`, or
- call `log_model(path, name=None, metadata={})`

To track datasets:

- set `log_dataset=True` to track the folder defined by `learn.dls.path`
- set `log_dataset="my_path"` to explicitly define a folder to track
- set `dataset_name` if you want a custom name; otherwise it is set to the folder name
- or call `log_dataset(path, name=None, metadata={})`
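The helper functions can also be called directly (a sketch; the paths and names are illustrative, and a run must already have been started with `wandb.init`):

```python
from fastai.callback.wandb import log_dataset, log_model

# Log a dataset folder as a W&B artifact (illustrative path and name)
log_dataset('data/my_dataset', name='my_dataset')

# Log a saved model file as a W&B artifact (illustrative path and name)
log_model('models/model.pth', name='my_model', metadata={'epochs': 4})
```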
Finally, you can pull data from all these runs to create awesome interactive reports, where your results are fully traceable!
Refer to the W&B documentation for more details.