Instrumenting Weights & Biases in Your Training Script with fastai

Learn to instrument Weights & Biases in your training script using the fastai library. This video is a sampling from the free MLOps certification course from Weights & Biases!
Created on December 22 | Last edited on December 28
In this video from our MLOps course, we'll demonstrate how easy it is to instrument Weights & Biases in your training script using the fastai library. With just a few lines of code, you can log your experiments, save your models, and monitor your predictions within the Weights & Biases platform.
This not only allows you to track your progress and understand how your model is performing, but it also makes it easy to share your results with your team and reproduce your experiments. And while we use fastai in this video, the process for instrumenting Weights & Biases is similar for other popular ML frameworks. If you're looking to improve your MLOps workflow and tracking, be sure to watch this video and learn how to instrument Weights & Biases in your training script!

Transcript (from Whisper)

In the integration docs we can see many popular frameworks, such as Keras or PyTorch, as well as repos such as HuggingFace or spaCy, and other popular tools.
So let's go back to the frameworks and pick fastai.
For most of these frameworks, Weights & Biases integration is pretty straightforward. In this case we need to have W&B installed, we need to log in, then we need to import the W&B callback, we need to init our run and add the callback to the fit function or to the learner object.
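Putting those steps together, a minimal sketch of the pattern, assuming an existing fastai `dls` (DataLoaders) and a placeholder project name:

```python
import wandb
from fastai.vision.all import vision_learner, resnet18
from fastai.callback.wandb import WandbCallback

wandb.login()                              # authenticate (after `pip install wandb`)
wandb.init(project="my-fastai-project")    # start a run; project name is a placeholder
learn = vision_learner(dls, resnet18)      # `dls` is an existing fastai DataLoaders
learn.fit(1, cbs=WandbCallback())          # attach the callback to fit (or to the Learner)
```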
There are a bunch of arguments here.
For example, we can log predictions, or we can log the model to Weights & Biases. So let's do this now in our training script. The first thing we need to do is import the W&B callback. We will also set a seed to ensure reproducibility.
We need to make sure that we define and store our hyperparameters. This will be important when we start running multiple experiments. We'll store our hyperparameters in a train config and pass this config into our W&B run.
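A sketch of the seed and config setup (the hyperparameter names and values here are illustrative assumptions, not the course's exact config):

```python
from fastai.vision.all import set_seed

set_seed(42, reproducible=True)  # fix Python/NumPy/PyTorch seeds for reproducibility

# Illustrative hyperparameters; the exact keys and values are assumptions.
train_config = dict(
    framework="fastai",
    img_size=180,
    batch_size=8,
    epochs=10,
    lr=2e-3,
    seed=42,
)
```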
We'll initialize our W&B run, and this time we'll be training a model, so the job type for the W&B run is training. We will use artifacts to track the data lineage of our models. At this stage we won't be using our test set; we'll monitor the results of our training runs on the validation set and come back to the test set in the evaluation stage later.
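A sketch of that initialization, with a hypothetical project name and dataset artifact name:

```python
import wandb

run = wandb.init(
    project="mlops-course",    # placeholder project name
    job_type="training",       # marks this run as a training run
    config=train_config,       # hyperparameters from above
)
config = wandb.config          # read hyperparameters back from the run

# Declaring the dataset artifact records data lineage for this run.
artifact = run.use_artifact("my-dataset:latest")   # hypothetical artifact name
dataset_dir = artifact.download()
```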
We are using the fastai DataBlock API to transform our data into the shape expected by our model. We will use the W&B config to set our hyperparameters, such as batch size or image size, and pass these parameters to create our data loaders.
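A minimal DataBlock sketch for segmentation under those assumptions (`CODES` and `label_func` are placeholders for the dataset's class names and the function mapping an image to its mask):

```python
from fastai.vision.all import (
    DataBlock, ImageBlock, MaskBlock, get_image_files, RandomSplitter, Resize
)

segmentation = DataBlock(
    blocks=(ImageBlock, MaskBlock(codes=CODES)),   # images in, class masks out
    get_items=get_image_files,
    get_y=label_func,                              # maps an image file to its mask
    splitter=RandomSplitter(valid_pct=0.2, seed=config.seed),
    item_tfms=Resize(config.img_size),             # image size from the W&B config
)
dls = segmentation.dataloaders(dataset_dir, bs=config.batch_size)  # batch size from config
```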
It's very important to pick and monitor the right metrics during our training runs. We'll monitor IoU, which stands for intersection over union. We will track IoU for each of our target classes and the mean across all of the classes.
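IoU for a class is the number of pixels where prediction and target agree on that class, divided by the number of pixels where either assigns it. A hedged sketch of a per-class metric using fastai's Metric interface (this class is an illustration, not the course's exact implementation):

```python
from fastai.vision.all import Metric

class ClassIOU(Metric):
    "IoU = |pred ∩ target| / |pred ∪ target| for one class, accumulated over batches."
    def __init__(self, class_idx, name):
        self.class_idx, self._name = class_idx, name
    def reset(self):
        self.inter, self.union = 0, 0
    def accumulate(self, learn):
        pred = learn.pred.argmax(dim=1)   # (bs, H, W) predicted class ids
        targ = learn.y                    # (bs, H, W) target class ids
        p, t = pred == self.class_idx, targ == self.class_idx
        self.inter += (p & t).sum().item()
        self.union += (p | t).sum().item()
    @property
    def value(self):
        return self.inter / self.union if self.union > 0 else None
    @property
    def name(self):
        return self._name

# Example usage: one metric per class, assuming CODES from the DataBlock above.
metrics = [ClassIOU(i, c) for i, c in enumerate(CODES)]
```

The mean IoU would then be the average of these per-class values across all classes.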
We'll talk more about these metrics in lesson 3 when we talk about model evaluation.
This is our baseline model, so we will use the classic U-Net architecture with a pre-trained ResNet backbone, and now we need to add the relevant callbacks.
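In fastai that baseline might be built like this (resnet18 is an illustrative backbone choice; `metrics` is the list of IoU metrics from above):

```python
from fastai.vision.all import unet_learner, resnet18

learn = unet_learner(dls, resnet18, metrics=metrics)  # U-Net with a pre-trained ResNet encoder
```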
Let's start with the save model callback. This will help us save the best model based on the metric that we choose, so let's choose mean IoU.
The W&B callback will log our experiment to Weights & Biases. We will log our predictions manually in a Weights & Biases table later, so let's set this one to false, and we also want to have our model logged to Weights & Biases, so let's set this one to true. To fit our model we'll again pass the right parameters from our config.
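Putting the callbacks and the fit call together, a sketch (the metric name "miou" and the config fields `epochs` and `lr` are assumptions):

```python
from fastai.callback.tracker import SaveModelCallback
from fastai.callback.wandb import WandbCallback

callbacks = [
    SaveModelCallback(monitor="miou"),               # checkpoint the best model by mean IoU
    WandbCallback(log_preds=False, log_model=True),  # log predictions manually later; upload the model
]
learn.fit_one_cycle(config.epochs, config.lr, cbs=callbacks)
```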
Later on, when we evaluate the model, we will save our predictions and log them in a Weights & Biases table, so that we can evaluate them and look at them in the dashboard. We'll also log our final metrics into the Weights & Biases summary.
We'll finish our run, so let's run this code now and we'll take a look at the results in our dashboard.
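A hedged sketch of that wrap-up (`samples` and `final_miou` are placeholders for the evaluation outputs; the table columns are illustrative):

```python
import wandb

# Log per-image predictions into a W&B Table for inspection in the dashboard.
table = wandb.Table(columns=["image", "prediction", "target"])
for img, pred, targ in samples:
    table.add_data(wandb.Image(img), wandb.Image(pred), wandb.Image(targ))
wandb.log({"pred_table": table})

run.summary["final_miou"] = final_miou  # write final metrics into the run summary
wandb.finish()                          # close the run
```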