Supercharging Ultralytics with Weights & Biases
A guide on using Weights & Biases with Ultralytics workflows for computer vision models
Created on July 20|Last edited on September 10
Introduction
Ultralytics is the home of cutting-edge, state-of-the-art computer vision models for tasks like image classification, object detection, image segmentation, and pose estimation. Not only does it host YOLOv8, the latest iteration in the YOLO series of real-time object detection models, but also other powerful computer vision models such as SAM (Segment Anything Model), RT-DETR, and YOLO-NAS. Besides providing implementations of these models, Ultralytics also offers out-of-the-box workflows for training, fine-tuning, and applying them through an easy-to-use API.
Weights & Biases is a developer-first MLOps platform that, when integrated with an Ultralytics workflow, lets us easily manage our experiments and model checkpoints and visualize the results of our experiments in an insightful and intuitive manner.
Object Detection
🐝 Using Weights & Biases with Ultralytics
Using Weights & Biases with our Ultralytics workflow is pretty easy. We just need to:
👉 Install Ultralytics and Weights & Biases using:
!pip install --upgrade ultralytics wandb
👉 Once all the dependencies are installed, you can start using Weights & Biases with your Ultralytics workflow:
from ultralytics import YOLO
from wandb.integration.ultralytics import add_wandb_callback

# Load a pretrained YOLO model (recommended for training)
model = YOLO("yolov8n.pt")

# Add the Weights & Biases callback to the model.
# This will work for training, evaluation, and prediction.
add_wandb_callback(model, enable_model_checkpointing=True)

# Train the model on the 'coco8-pose.yaml' dataset for 5 epochs.
# Results of evaluating the validation batch are logged
# to a W&B table at the end of each epoch.
model.train(project="ultralytics", data="coco8-pose.yaml", epochs=5, imgsz=640)

# Evaluate the model's performance on the validation set.
# The validation results are logged to a W&B table.
model.val()

# Perform prediction on images using the model.
# The prediction results are logged to a W&B table.
model(["image1.png", "image2.png", "image3.png", "image4.png"])
👉 By setting enable_model_checkpointing=True in the add_wandb_callback function, model checkpoints are logged to W&B as model artifacts.
👉 If your Ultralytics program involves training or validation, you don't need to initialize a W&B run; one is created automatically before training begins. Note that you can specify the name of the W&B project in model.train(). If your program only performs prediction, you need to initialize a W&B run yourself by calling wandb.init().
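As a minimal sketch of such a prediction-only workflow (the project name and image paths here are placeholders, not values from the original report):

```python
import wandb
from ultralytics import YOLO
from wandb.integration.ultralytics import add_wandb_callback

# For prediction-only workflows, initialize the W&B run explicitly.
wandb.init(project="ultralytics", job_type="inference")

# Load a pretrained detection model and attach the W&B callback.
model = YOLO("yolov8n.pt")
add_wandb_callback(model)

# Run prediction; the results are logged to a W&B table.
model(["image1.png", "image2.png"])

# Close the run once prediction is finished.
wandb.finish()
```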
Visualizing Predictions
Object Detection
Visualizing Training and Validation Results for Object Detection
In the following panel, we see the results of training several YOLO model variants supported by Ultralytics (including YOLOv3, YOLOv5, and YOLOv8 models) on the COCO128 subset of the MS COCO dataset, which is supported out-of-the-box in Ultralytics.
Training and validating object detection models from Ultralytics using Weights & Biases 🐝
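A comparison like this can be sketched by looping over checkpoint names. The exact variant names below (e.g. yolov5nu.pt for Ultralytics' YOLOv5 port) are assumptions, not taken from the report; adjust them to the variants you want to compare:

```python
from ultralytics import YOLO
from wandb.integration.ultralytics import add_wandb_callback

# Checkpoint names are assumptions based on Ultralytics' naming scheme;
# each training run is logged to the same W&B project for comparison.
for checkpoint in ["yolov5nu.pt", "yolov8n.pt"]:
    model = YOLO(checkpoint)
    add_wandb_callback(model, enable_model_checkpointing=True)
    model.train(project="ultralytics", data="coco128.yaml", epochs=5, imgsz=640)
```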
Visualizing Training and Validation Results for Image Segmentation
In the following panels, we see the results of training different models from the YOLOv8 family for instance segmentation on the COCO128-seg subset of the MS COCO dataset.
Training and validating image segmentation models from Ultralytics using Weights & Biases 🐝
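The segmentation runs above follow the same pattern as the detection example, swapping in a segmentation checkpoint and dataset. A minimal sketch (checkpoint and dataset names assume Ultralytics' standard naming):

```python
from ultralytics import YOLO
from wandb.integration.ultralytics import add_wandb_callback

# Load a pretrained YOLOv8 segmentation checkpoint.
model = YOLO("yolov8n-seg.pt")
add_wandb_callback(model, enable_model_checkpointing=True)

# Train on the COCO128-seg subset; metrics and validation-batch
# visualizations are logged to W&B at the end of each epoch.
model.train(project="ultralytics", data="coco128-seg.yaml", epochs=5, imgsz=640)
```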
Visualizing Training and Validation Results for Pose Estimation
In the following panels, we see the results of training different models from the YOLOv8 family for human pose estimation on the COCO8-pose subset of the MS COCO dataset.
Training and validating pose estimation models from Ultralytics using Weights & Biases 🐝
Visualizing Training and Validation Results for Image Classification
In the following panels, we see the results of training different models from the YOLOv8 family for image classification on the imagenette160 subset of the Imagenette dataset, which is supported out-of-the-box in Ultralytics.
Training and validating image classification models from Ultralytics using Weights & Biases 🐝
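The classification runs follow the same pattern again, with a classification checkpoint and the imagenette160 dataset string. A minimal sketch (the -cls checkpoint name and 160px image size are assumptions based on Ultralytics conventions):

```python
from ultralytics import YOLO
from wandb.integration.ultralytics import add_wandb_callback

# Load a pretrained YOLOv8 classification checkpoint.
model = YOLO("yolov8n-cls.pt")
add_wandb_callback(model, enable_model_checkpointing=True)

# Train on the imagenette160 subset; imgsz matches its 160px images.
model.train(project="ultralytics", data="imagenette160", epochs=5, imgsz=160)
```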
Conclusion
In this report, we discussed how to use Weights & Biases with computer vision workflows built on Ultralytics models to manage experiments, visualize and explore results, manage model checkpoints, and much more. For more resources on using Weights & Biases for computer vision tasks, check out the following reports:
Training Semantic Segmentation Models for Autonomous Vehicles (A Step-by-Step Guide)
A short tutorial on leveraging Weights & Biases to train a semantic segmentation model for autonomous vehicles.
Object Detection for Autonomous Vehicles (A Step-by-Step Guide)
Digging into object detection and perception for autonomous vehicles using YOLOv5 and Weights & Biases
Low-Light Image Enhancement: Lighting up Images in the Deep Learning Era
In this article, we explore some deep learning techniques for low-light image enhancement, so that you can enhance images taken under sub-optimal conditions.
How To Use Weights & Biases With MMDetection
In this article, we'll train an object detection model using MMDetection and learn how to use MMDetWandbHook to log metrics, visualize predictions, and more.
YOLO-NAS: SoTA Foundation Model for Object Detection
YOLO-NAS is a new foundation model for object detection that sets a new state-of-the-art standard.
XLA Compatibility of Vision Models in Keras
A set of comprehensive benchmarks around XLA compatibility of computer vision models implemented in Keras.