
Paper Reading Group: MLP-Mixer w/ special guest Dr. Habib

The paper reading groups are supported by experiments, blogs & code implementations!
Recently, a new kind of architecture, MLP-Mixer: An all-MLP Architecture for Vision (Tolstikhin et al., 2021), was proposed that claims competitive performance with SOTA models on ImageNet without using convolutions or attention.
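To make the "all-MLP" idea concrete before the session, here is a rough sketch of a single Mixer layer: a token-mixing MLP applied across patches, followed by a channel-mixing MLP applied across channels, each wrapped in LayerNorm and a skip connection. It's written in PyTorch purely for illustration; the class names, dimensions, and hidden widths below are our own choices, not the authors' code.

```python
import torch
import torch.nn as nn

class MlpBlock(nn.Module):
    """Two fully connected layers with a GELU in between, applied to the last dimension."""
    def __init__(self, dim, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            nn.GELU(),
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x):
        return self.net(x)

class MixerBlock(nn.Module):
    """One Mixer layer: token mixing across patches, then channel mixing across channels."""
    def __init__(self, num_patches, channels, tokens_hidden, channels_hidden):
        super().__init__()
        self.norm1 = nn.LayerNorm(channels)
        self.token_mlp = MlpBlock(num_patches, tokens_hidden)
        self.norm2 = nn.LayerNorm(channels)
        self.channel_mlp = MlpBlock(channels, channels_hidden)

    def forward(self, x):                              # x: (batch, patches, channels)
        # Token mixing: transpose so the MLP mixes information across patches.
        y = self.norm1(x).transpose(1, 2)              # (batch, channels, patches)
        x = x + self.token_mlp(y).transpose(1, 2)
        # Channel mixing: the MLP mixes information across channels, per patch.
        return x + self.channel_mlp(self.norm2(x))

# Shape check with made-up sizes: 196 patches (14x14) and 512 channels.
block = MixerBlock(num_patches=196, channels=512, tokens_hidden=256, channels_hidden=2048)
print(block(torch.randn(2, 196, 512)).shape)           # torch.Size([2, 196, 512])
```

Stacking several of these blocks after a per-patch linear embedding, then finishing with global average pooling and a classifier head, gives the full architecture.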
Here are some questions we will deal with at our upcoming Paper Reading Group:
  • Is this MLP-Mixer really "conv-free"? (see the quick check after this list)
  • Are there other ways of implementing the mixer layer?
  • What does the overall architecture look like?
  • What are the main contributions from the paper?
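A quick teaser for the "conv-free" question: the per-patch embedding at the input of MLP-Mixer is mathematically the same operation as a convolution whose kernel size equals its stride. The toy check below (PyTorch again, with sizes we picked arbitrarily) copies a Conv2d's weights into a Linear layer applied to flattened patches and confirms the two produce the same outputs.

```python
import torch
import torch.nn as nn

# A Linear layer on flattened, non-overlapping patches equals a Conv2d
# whose kernel_size and stride are both the patch size.
patch, in_ch, dim = 16, 3, 8
conv = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
linear = nn.Linear(in_ch * patch * patch, dim)
with torch.no_grad():
    linear.weight.copy_(conv.weight.reshape(dim, -1))   # same weights, just flattened
    linear.bias.copy_(conv.bias)

x = torch.randn(1, in_ch, 64, 64)                       # 64x64 image -> 4x4 = 16 patches
out_conv = conv(x).flatten(2).transpose(1, 2)           # (1, 16, dim)

# Cut the image into patches by hand and flatten each one.
p = x.unfold(2, patch, patch).unfold(3, patch, patch)   # (1, C, 4, 4, 16, 16)
p = p.permute(0, 2, 3, 1, 4, 5).reshape(1, -1, in_ch * patch * patch)
out_linear = linear(p)                                  # (1, 16, dim)

print(torch.allclose(out_conv, out_linear, atol=1e-5))  # True
```

Whether that makes the model "conv-free" in spirit is exactly the kind of thing we'll debate in the session.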
Together with our special guest, Dr. Habib, we'll be looking into this and more!

Register & join us for our live Paper Reading Group on July 13!

🎥 Find all previous recordings on our Paper Reading Group playlist.