Freezing Layers in YOLOv5

Transfer learning is a useful way to quickly retrain a model on new data without having to retrain the entire network. Instead, part of the initial weights are frozen in place, and only the remaining weights are used to compute loss and are updated by the optimizer. This requires fewer resources than normal training and allows for faster training times, though it may also result in a reduction in final trained accuracy.
Created on November 6 | Last edited on November 6
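Mechanically, freezing comes down to setting `requires_grad = False` on the parameters you want left untouched before building the optimizer. The sketch below shows the pattern on a stand-in `nn.Sequential` model rather than the real YOLOv5 network; the prefix list and module count are illustrative assumptions (in YOLOv5, the backbone corresponds roughly to modules `model.0` through `model.9`).

```python
import torch
import torch.nn as nn

# Stand-in model: 12 small conv modules in place of the real YOLOv5 layers.
model = nn.Sequential(*[nn.Conv2d(3, 3, 1) for _ in range(12)])

# Hypothetical "backbone" prefixes: parameter names starting with '0.'..'9.'.
# (YOLOv5 itself matches prefixes like 'model.0.' ... 'model.9.'.)
freeze = [f"{i}." for i in range(10)]

for name, param in model.named_parameters():
    param.requires_grad = True  # train everything by default
    if any(name.startswith(prefix) for prefix in freeze):
        param.requires_grad = False  # frozen: excluded from gradient updates

# Only unfrozen parameters are handed to the optimizer.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)

frozen = sum(p.numel() for p in model.parameters() if not p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"frozen {frozen}/{total} parameters")
```

Frozen parameters still participate in the forward pass (their outputs feed later layers), so they still cost compute and memory at inference time; the savings come from skipping their gradient computation and optimizer updates.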



All three runs were launched with identical arguments except `--name`:

`--batch 48 --weights yolov5m.pt --data voc.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --name <run_name> --logdir "../drive/My Drive/cloud/runs/voc"`

where `<run_name>` is one of `yolov5m_default`, `yolov5m_freeze_all`, or `yolov5m_freeze_backbone`.

| | yolov5m_default | yolov5m_freeze_all | yolov5m_freeze_backbone |
|---|---|---|---|
| Training time | 2h 6m 46s | 1h 40m 30s | 1h 37m 10s |
| mAP@0.5 | 0.89239 | 0.85068 | 0.89029 |
| mAP@0.5:0.95 | 0.68237 | 0.63278 | 0.67555 |
| Precision | 0.59288 | 0.51894 | 0.55104 |
| Recall | 0.93398 | 0.92324 | 0.94056 |
| train cls loss | 0.0053985 | 0.0089604 | 0.0068453 |
| train box loss | 0.019863 | 0.024269 | 0.021283 |
| train obj loss | 0.010136 | 0.012657 | 0.010932 |
| val cls loss | 0.0013354 | 0.0023414 | 0.001434 |
| val box loss | 0.015098 | 0.016004 | 0.01506 |
| val obj loss | 0.0055725 | 0.006643 | 0.0057281 |
| Runtime (s) | 7596 | 6020 | 5824 |
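The trade-off is easy to quantify from the runtime and mAP@0.5 summary values reported above (the numbers below are copied straight from those summaries; no new measurements are introduced):

```python
# Training-time savings and mAP@0.5 change for each run, relative to the
# default (fully trained) run, using the summary values reported above.
runs = {
    "yolov5m_default":         {"runtime_s": 7596, "map50": 0.89239},
    "yolov5m_freeze_all":      {"runtime_s": 6020, "map50": 0.85068},
    "yolov5m_freeze_backbone": {"runtime_s": 5824, "map50": 0.89029},
}
base = runs["yolov5m_default"]

for name, r in runs.items():
    saving = 1 - r["runtime_s"] / base["runtime_s"]
    delta = r["map50"] - base["map50"]
    print(f"{name}: {saving:.1%} faster, mAP@0.5 change {delta:+.4f}")
```

Freezing the backbone recovers most of the speedup of freezing everything (roughly a 23% shorter run versus 21%) while giving up only about 0.002 mAP@0.5, whereas freezing all layers costs over 0.04 mAP@0.5.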