Object Detection with PaddleDetection and Weights & Biases
In this article, we'll learn how to use PaddleDetection to train a YOLOX model from scratch, logging all of your metrics and model checkpoints with Weights & Biases.
Created on May 24|Last edited on February 2
PaddleDetection is an end-to-end object detection development kit based on PaddlePaddle. It implements varied mainstream object detection, instance segmentation, tracking, and keypoint detection algorithms in modular design with configurable modules such as network components, data augmentations, and losses.
PaddleDetection now comes with a built-in W&B integration that logs all your training and validation metrics, your model checkpoints, and their corresponding metadata.
If you would like to follow along with this tutorial using runnable code, check out the Colab link below!
In this tutorial, we'll train a YOLOX model on a subset of the COCO2017 dataset, which contains 1,000 images in the training set and 250 in the validation set.
You can activate the W&B logger for your training jobs in two ways:
- Command Line: Add the `--use_wandb` flag, and pass the W&B logger arguments after -o; each individual argument must be preceded by the prefix "wandb-".
python tools/train.py -c config.yml --use_wandb -o wandb-project=MyDetector wandb-entity=MyTeam wandb-save_dir=./logs
- YAML File: Add the arguments to the configuration YAML file under the wandb header, like this:
wandb:
  project: MyProject
  entity: MyTeam
  save_dir: ./logs
Setting Things Up
Installing the W&B SDK
We start by installing the wandb SDK, followed by logging into our W&B account:
pip install wandb
wandb login
Installing PaddleDetection
Next, we'll clone the PaddleDetection repository and install the package from source, since it also contains a training script along with configurations for a wide variety of pre-implemented models!
pip install paddlepaddle-gpu pyclipper attrdict gdown -qqq
git clone https://github.com/PaddlePaddle/PaddleDetection
cd PaddleDetection
pip install -e .
How to Download the COCO Dataset
The dataset has been logged as a W&B Artifact for easier downloading. It contains 1000 images for training and 250 for validation with corresponding annotations, which we will now use for training our object detection model.
import wandb

artifact = wandb.Api().artifact("manan-goel/PaddleDetectionYOLOX/COCOSubset:latest")
path = artifact.download(root='./dataset/coco')
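To sanity-check the download before training, you can count the images in each split. This is a minimal sketch, assuming the artifact unpacks into the standard COCO layout (train2017/ and val2017/ folders) under ./dataset/coco:

```python
from pathlib import Path

def count_images(split_dir: Path) -> int:
    """Count the .jpg files in one dataset split directory."""
    return len(list(split_dir.glob("*.jpg")))

root = Path("./dataset/coco")
for split in ("train2017", "val2017"):
    # For this subset we expect 1,000 training and 250 validation images.
    print(f"{split}: {count_images(root / split)} images")
```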
Training
Training the Model
We now use the training script in the PaddleDetection library to train the YOLOX model, with the W&B arguments enabling logging during training. We also add the --eval flag to run an evaluation step every 5 epochs.
Using the CLI
python tools/train.py -c configs/yolox/yolox_nano_300e_coco.yml --use_wandb -o wandb-project=PaddleDetectionYOLOX --eval
Using the Configuration YAML File
To automatically start W&B experiment tracking for your training pipeline, add the following snippet to the configuration YAML file, which is then passed to the training script.
wandb:
  project: PaddleDetectionYOLOX
Visualization
Tracking the Training and Validation Metrics
During training, the metrics on the training and validation sets are logged to an interactive W&B dashboard.
Tracking System Metrics
W&B also automatically tracks system metrics, such as GPU utilization and memory, for the duration of your training runs.
Downloading the Best Model From W&B
The model checkpoints are logged as W&B artifacts and can be downloaded for evaluation using the following snippet.
import wandb

artifact = wandb.Api().artifact('manan-goel/PaddleDetectionYOLOX/model-26oqc38r:best', type='model')
artifact_dir = artifact.download()
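After downloading, artifact_dir points at the local checkpoint directory; you can confirm its contents before wiring the weights path into the inference command. A minimal sketch (the list_artifact_files helper is ours, for illustration only):

```python
from pathlib import Path

def list_artifact_files(artifact_dir: str) -> list[str]:
    """List the file names inside a downloaded W&B artifact directory."""
    d = Path(artifact_dir)
    return sorted(p.name for p in d.iterdir()) if d.is_dir() else []

print(list_artifact_files("./artifacts/model-26oqc38r:v1"))
```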
Testing on Images
The following cell runs the inference script in the PaddleDetection repository on all the images in the demo directory. It stores them in the infer_output directory using the YOLOX model pulled from W&B.
for i in $(ls demo/*.jpg)
do
    python tools/infer.py -c configs/yolox/yolox_nano_300e_coco.yml \
        --infer_img=$i \
        --output_dir=infer_output/ \
        --draw_threshold=0.5 \
        -o weights=./artifacts/model-26oqc38r:v1/model
done
This loop runs the inference script on each .jpg file in the demo directory using the downloaded model and stores the annotated images in the infer_output directory.
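If you'd rather drive inference from Python than a shell loop, the same commands can be built and launched with subprocess. This is a sketch assuming the paths used above; the build_infer_cmd helper is ours, not part of PaddleDetection:

```python
import glob
import subprocess

def build_infer_cmd(img: str) -> list[str]:
    """Build the tools/infer.py command for one image, mirroring the shell loop's flags."""
    return [
        "python", "tools/infer.py",
        "-c", "configs/yolox/yolox_nano_300e_coco.yml",
        f"--infer_img={img}",
        "--output_dir=infer_output/",
        "--draw_threshold=0.5",
        "-o", "weights=./artifacts/model-26oqc38r:v1/model",
    ]

# Run inference on every demo image (no-op if the demo directory is absent).
for img in sorted(glob.glob("demo/*.jpg")):
    subprocess.run(build_infer_cmd(img), check=True)
```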
Bonus: Logging Annotated Images to Your W&B Dashboard
import glob
import wandb

wandb.init(project="PaddleDetectionYOLOX")
wandb.use_artifact('manan-goel/PaddleDetectionYOLOX/model-26oqc38r:best')

table = wandb.Table(columns=["Input Image", "Annotated Image"])

inp_imgs = sorted(glob.glob("./demo/*.jpg"), key=lambda x: x.split("/")[-1])
out_imgs = sorted(glob.glob("./infer_output/*.jpg"), key=lambda x: x.split("/")[-1])

for inp in inp_imgs:
    for out in out_imgs:
        # Pair each input image with the annotated output that shares its file name.
        if out.split("/")[-1] != inp.split("/")[-1]:
            continue
        table.add_data(wandb.Image(inp), wandb.Image(out))

wandb.log({"Predictions": table})
wandb.finish()
The script above takes the input images and their annotated counterparts and logs them side by side to a W&B Table.
Conclusion
This tutorial gives a quick run-through on how you can use W&B in conjunction with PaddleDetection to support all your object detection model development needs. Check out the Colab for a version of this report with executable code.
Related Work
YOLOv5 Object Detection on Windows (Step-By-Step Tutorial)
This tutorial guides you through installing and running YOLOv5 on Windows with PyTorch GPU support. Includes an easy-to-follow video and Google Colab.
Search and Rescue: Augmentation and Preprocessing on Drone-Based Water Rescue Images With YOLOv5
In this article, we look at achieving mAP scores of over 0.97 on large images with comparatively very small people, as used in drone-based water rescue.