Paper Reading Group: EfficientNetV2: Smaller Models and Faster Training

The paper reading groups are supported by experiments, blogs & code implementations! Made by Andrea Pessl using Weights & Biases.
After three Paper Reading Groups on computer vision, we were excited to have Aman Arora host the fourth paper in W&B's PRG series.

Register here for upcoming sessions!

In this paper, EfficientNetV2: Smaller Models and Faster Training [paper, summary blog], the authors combine training-aware neural architecture search and scaling to jointly optimize accuracy, parameter efficiency, and training speed, yielding models that train faster and are smaller than prior EfficientNets. A key idea is progressive learning: image size is increased gradually over the course of training, with regularization (such as dropout and data augmentation) adaptively strengthened as images grow larger, so training speeds up without a drop in accuracy.
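EfficientNetV2's progressive-learning idea, gradually increasing image size while strengthening regularization in step, can be sketched as a simple linear schedule. The sizes and dropout values below are illustrative placeholders, not the paper's actual hyperparameters:

```python
def progressive_schedule(stage, num_stages,
                         min_size=128, max_size=300,
                         min_dropout=0.1, max_dropout=0.3):
    """Return (image_size, dropout) for a 0-indexed training stage.

    Both quantities are linearly interpolated from their minimum
    (early training, small images, weak regularization) to their
    maximum (late training, large images, strong regularization).
    Values here are illustrative, not the paper's settings.
    """
    t = stage / (num_stages - 1)  # progress through training, 0.0 -> 1.0
    size = int(min_size + t * (max_size - min_size))
    dropout = min_dropout + t * (max_dropout - min_dropout)
    return size, dropout

# Example: a 4-stage training run
for s in range(4):
    size, dropout = progressive_schedule(s, 4)
    print(f"stage {s}: image size {size}, dropout {dropout:.2f}")
```

Each stage would train for a fixed number of epochs at its assigned image size and regularization strength before moving to the next.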

All recordings can be rewatched in this YouTube playlist!