YOLOv5 fine-tuning on Duckietown Object Detection dataset
A summary of fine-tuning YOLOv5 on the Duckietown Object Detection dataset
See Exploratory Data Analysis report
Fine-Tuning Summary
Overall, the following insights indicate that fine-tuning YOLOv5 on the Duckietown object detection dataset produced a good result.
One potential performance improvement derived from these insights is to improve the ground-truth labelling for the duckie class, since the model also reliably detects duckies that were mistakenly left unlabelled in the original dataset (presumably because they were further away / too small).
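A hedged sketch of how those missing duckie labels could be surfaced automatically: run the fine-tuned model over the training set and flag high-confidence duckie detections that overlap no ground-truth box. The checkpoint path best.pt, the duckie class index, and both thresholds below are illustrative assumptions, not the exact setup of this run.

```python
# Sketch: flag confident duckie detections with no matching ground-truth box,
# as candidates for fixing the original labels. Checkpoint path, class index,
# and thresholds are illustrative assumptions.
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")  # fine-tuned weights
DUCKIE = 1  # assumed class index for "duckie"

def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = (a[2] - a[0]) * (a[3] - a[1]) + (b[2] - b[0]) * (b[3] - b[1]) - inter
    return inter / (union + 1e-9)

def missing_label_candidates(img_path, gt_boxes, conf_thres=0.6, iou_thres=0.3):
    """gt_boxes: ground-truth boxes for img_path, in pixel [x1, y1, x2, y2]."""
    detections = model(img_path).xyxy[0].tolist()  # [x1, y1, x2, y2, conf, cls] rows
    return [
        (box, conf)
        for *box, conf, cls in detections
        if int(cls) == DUCKIE
        and conf >= conf_thres
        and all(iou(box, gt) < iou_thres for gt in gt_boxes)
    ]
```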
The following grid summarizes the model fine-tuning into 6 insights:
- Table of training/validation losses and the 4 tracked metrics (precision / recall / mAP_0.5 / mAP_0.5:0.95)
- Confusion matrix for the predicted classes
- F1-Confidence curve (see the derivation sketch after this list)
- Precision-Recall curve
- Precision-Confidence curve
- Recall-Confidence curve
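Since the F1-Confidence curve is a simple function of the logged precision/recall data, here is a minimal sketch of how it and the best-F1 threshold are derived; the arrays below are placeholders standing in for the real per-confidence precision and recall values:

```python
# Sketch: derive the F1-Confidence curve from per-confidence precision/recall.
import numpy as np

conf = np.linspace(0.0, 1.0, 101)   # confidence grid (assumed sampling)
precision = np.random.rand(101)     # placeholder for logged P at each confidence
recall = np.random.rand(101)        # placeholder for logged R at each confidence

f1 = 2 * precision * recall / (precision + recall + 1e-16)  # harmonic mean of P and R
best = conf[f1.argmax()]            # confidence threshold that maximizes F1
print(f"best confidence threshold ~ {best:.2f} (F1 = {f1.max():.3f})")
```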
(Panel grid for run: playful-plasma-1)
A closer inspection of the class-specific metrics extracted from the logs shows that the model had the hardest time with the duckie class:

| Class | Images | Instances | P | R | mAP50 | mAP50-95 |
|---|---|---|---|---|---|---|
| all | 176 | 479 | 0.91 | 0.923 | 0.961 | 0.617 |
| cone | 176 | 30 | 0.93 | 0.886 | 0.975 | 0.549 |
| duckie | 176 | 259 | 0.838 | 0.92 | 0.925 | 0.595 |
| duckiebot | 176 | 190 | 0.96 | 0.963 | 0.985 | 0.706 |

The relatively lower precision for this class is explained by its higher false-positive rate. A further inspection of the images, however, confirms that some of these "false positive" predictions are in fact correct: the detected duckies are real but were simply not labelled as such in the ground truth, probably because they were considered too far away / too small.
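As a back-of-the-envelope check on that explanation (assuming P and R in the table are reported at the same operating point), the implied per-class true/false positive counts can be estimated directly from the numbers above:

```python
# Sketch: estimate implied TP/FP counts per class from the table above,
# assuming P and R are measured at the same confidence threshold.
rows = {  # class: (instances, precision, recall)
    "cone":      (30,  0.93,  0.886),
    "duckie":    (259, 0.838, 0.92),
    "duckiebot": (190, 0.96,  0.963),
}
for cls, (n, p, r) in rows.items():
    tp = r * n        # recall = TP / instances
    fp = tp / p - tp  # precision = TP / (TP + FP)
    print(f"{cls:>9}: ~{tp:.0f} TP, ~{fp:.0f} FP")
# duckie ends up with roughly 46 implied false positives vs. ~8 for duckiebot
# and ~2 for cone, consistent with the precision gap.
```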
(Prediction panels for run: playful-plasma-1)
Finally, an analysis of the labels confirms the need to stratify the dataset by its underrepresented classes; a minimal split sketch follows the panel below.
(Labels analysis panels for run: playful-plasma-1)
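A minimal sketch of such a stratified split, assuming YOLO-format label files in a labels/ directory and using the globally rarest class present in each image as the stratum (paths and class indices are illustrative assumptions):

```python
# Sketch: stratified train/val split keyed on the rarest class in each image.
# Assumes YOLO-format label files (one "cls cx cy w h" line per box); the
# directory layout and class order are illustrative assumptions.
from pathlib import Path
from collections import Counter
from sklearn.model_selection import train_test_split

label_dir = Path("labels")  # assumed location of the label files

def classes_in(label_file):
    """Set of class indices annotated in one label file."""
    return {int(line.split()[0]) for line in label_file.read_text().splitlines() if line.strip()}

label_files = sorted(label_dir.glob("*.txt"))
# Global per-class instance counts, used to rank rarity.
counts = Counter(c for f in label_files for c in classes_in(f))
# Stratum per image: its globally rarest class (-1 for background-only images).
# Note: train_test_split needs at least 2 images in every stratum.
strata = [
    min(cs, key=lambda c: counts[c]) if (cs := classes_in(f)) else -1
    for f in label_files
]
train_files, val_files = train_test_split(
    label_files, test_size=0.2, stratify=strata, random_state=0
)
```

For images containing several rare classes at once, iterative stratification (e.g. IterativeStratification from scikit-multilearn) would be the more principled multi-label alternative to this single-stratum heuristic.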