
Colorizing Black and White Images with Weights & Biases

How can we add realistic color to black & white images? Explore the effect of up-convolutions, weight decay, and deeper architectures.
This report is a saved snapshot of Boris' research. He's published this example so you can see how to use W&B to visualize training and keep track of your work. Feel free to add a visualization, click on graphs and data, and play with features. Your edits won't overwrite his work.

Project Description

The goal is to colorize black and white pictures of flowers in a realistic way.
Join the collaborative development of models in our Colorizer Benchmark →

In the graphs below, I visualize a few of the most interesting runs:
  • Green: the 5-layer baseline
  • Blue: the baseline with its up-sampling layers replaced by up-convolutions
  • Orange: the 6-layer architecture
  • Red: the 6-layer architecture with weight decay


[Panel: Run set 1 (4 runs)]

Surprisingly, replacing the simple up-sampling layers with up-convolutions did not affect accuracy while increasing the model size by about 35%!
This suggests that the learned up-convolution filters do nothing more interesting than rudimentary interpolation.
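To make the comparison concrete, here is a minimal sketch (not the benchmark's actual code, and assuming a tf.keras model) of the two decoder options: a parameter-free UpSampling2D layer versus a Conv2DTranspose up-convolution performing the same 2x upscale. The feature-map size and channel count are placeholders.

```python
import tensorflow as tf
from tensorflow.keras import layers

# Placeholder feature map: 32x32 with 64 channels (illustrative values only).
x = tf.keras.Input(shape=(32, 32, 64))

# Option 1: simple up-sampling, a fixed nearest-neighbour interpolation with no weights.
up = layers.UpSampling2D(size=2)
# Option 2: up-convolution (transposed convolution) with learned 2x upscaling filters.
upconv = layers.Conv2DTranspose(64, kernel_size=2, strides=2, padding="same")

_ = up(x), upconv(x)  # call each layer once so its weights are created
print("UpSampling2D parameters:   ", up.count_params())      # 0
print("Conv2DTranspose parameters:", upconv.count_params())  # 2*2*64*64 + 64 = 16448
```

Since UpSampling2D has no weights at all, every decoder stage that swaps it for an up-convolution adds trainable parameters, which in these runs did not translate into better accuracy.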


[Panel: Run set 1 (2 runs)]

I tried to go as deep as possible, hoping it would lead to better final accuracy.
The 7-layer architecture quickly shows signs of over-fitting, so I settled on 6 layers.
Note: the 7-layer run starts at a lower loss because its training actually began a few epochs before this log.
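For reference, here is a hedged sketch of how such a depth sweep could be parameterized. The encoder-decoder layout, filter counts, and input size are assumptions made for illustration; only the idea of varying the number of layers comes from the runs above.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_colorizer(n_layers, base_filters=32, size=128):
    """Illustrative encoder-decoder: grayscale image in, RGB image out."""
    inputs = tf.keras.Input(shape=(size, size, 1))
    x = inputs
    # Encoder: each layer halves the resolution and (up to a cap) doubles the filters.
    for i in range(n_layers):
        x = layers.Conv2D(min(base_filters * 2 ** i, 256), 3, strides=2,
                          padding="same", activation="relu")(x)
    # Decoder: mirror the encoder back up to full resolution with simple up-sampling.
    for i in reversed(range(n_layers)):
        x = layers.UpSampling2D(size=2)(x)
        x = layers.Conv2D(min(base_filters * 2 ** i, 256), 3,
                          padding="same", activation="relu")(x)
    outputs = layers.Conv2D(3, 1, activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)

# The 5-, 6- and 7-layer variants discussed above.
for n in (5, 6, 7):
    print(f"{n} layers: {build_colorizer(n).count_params():,} parameters")
```

Each extra layer adds capacity, which is consistent with the deepest variant being the first to over-fit.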


[Panel: Run set 1 (3 runs)]

Using weight decay made training much slower.
After testing several weight decay contribution factors, I decided I didn't have enough time or resources to use it.
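As a sketch of what a weight decay contribution factor could look like in practice (again assuming tf.keras; the original code may have applied it differently, for example through the optimizer), an L2 penalty can be attached to each convolution's kernel, with the factor scaling the extra term added to the training loss:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def conv_block(x, filters, weight_decay=0.0):
    """3x3 convolution whose kernel is penalized by an L2 (weight decay) term."""
    reg = regularizers.l2(weight_decay) if weight_decay else None
    return layers.Conv2D(filters, 3, padding="same", activation="relu",
                         kernel_regularizer=reg)(x)

# Each factor adds weight_decay * sum(w**2) per convolution to the loss,
# which is why larger factors make the plotted training loss fall more slowly.
for wd in (1e-5, 1e-4, 1e-3):  # illustrative contribution factors only
    inputs = tf.keras.Input(shape=(128, 128, 1))
    outputs = conv_block(inputs, 64, weight_decay=wd)
    model = tf.keras.Model(inputs, outputs)
    print(f"weight_decay={wd}: regularization terms added = {len(model.losses)}")
```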


[Panel: Run set 1 (4 runs)]
