Do classifiers generalize poorly on unbalanced and long-tailed distributions?
Created on March 30|Last edited on March 30
Many real-world datasets are heavily class-imbalanced, with only a few examples for most classes. The question that arises in such settings is whether neural networks can transfer invariances learned on well-represented classes to the rarer, smaller ones.
In a new paper titled "Do Deep Networks Transfer Invariances Across Classes?", the authors show that, in general, deep networks do *not* transfer invariances. In fact, invariance to class-agnostic transformations remains heavily dependent on class size, with networks being much less invariant on smaller classes. This result holds even when data-balancing techniques are used, and suggests poor invariance transfer across classes.
Authors' approach
The authors show how a generative approach to learning these nuisance transformations can help transfer invariances across classes and improve performance on a set of imbalanced image-classification benchmarks. More specifically, they explore Generative Invariance Transfer (GIT), a method that directly learns a generative model of a dataset's nuisance transformations and then applies it as data augmentation.
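To make the idea concrete, here is a minimal toy sketch of the GIT-style augmentation loop. It assumes you already have a trained generative model of the nuisance transformations; the `learned_nuisance_transform` function below is a hypothetical stand-in for that model (the paper uses an image-to-image translation network), and `git_augment`, `max_count`, and the additive-noise "transformation" are illustrative choices, not the authors' actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def learned_nuisance_transform(x, z):
    # Hypothetical stand-in for a trained generative model G(x, z) that
    # applies a dataset-specific nuisance transformation (e.g. a lighting
    # or pose change) selected by the latent code z. Here it is just a
    # toy additive perturbation so the sketch runs end to end.
    return x + 0.1 * z

def git_augment(batch_x, batch_y, class_counts, max_count=5):
    """Augment examples from rare classes with transformations learned
    from the whole dataset, so small classes see the same nuisance
    variation that large classes exhibit naturally (GIT-style sketch)."""
    out = []
    for x, y in zip(batch_x, batch_y):
        if class_counts[int(y)] <= max_count:  # rare class: inject variation
            z = rng.normal(size=x.shape)       # sample a transformation code
            x = learned_nuisance_transform(x, z)
        out.append(x)
    return np.stack(out), np.asarray(batch_y)

# Toy imbalanced batch: class 0 is common, class 1 is rare.
X = rng.normal(size=(8, 4))
y = np.array([0, 0, 0, 0, 0, 0, 1, 1])
Xa, ya = git_augment(X, y, class_counts={0: 6, 1: 2})
```

The key design point is that the generative model is trained on the full dataset, so the transformations it captures come mostly from the well-represented classes, and the augmentation step is what carries them over to the rare ones.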

Tags: ML News