Reproducibility Challenge

Welcome to the 2021 Spring Edition of the ML Reproducibility Challenge!

Track Performance
Keep track of your experiments, predictions, and hyperparameters, and validate the paper's results.
Seamless Sharing
Share your findings with dynamic graphs, images, audio notes and runsets in flexible formats.
Easily invite collaborators to edit and comment.
Supporting the RC participants
At W&B, we strongly believe that research should be reproducible and accessible, so we're excited to support participants in the PaperswithCode Reproducibility Challenge and to do what we can to help this important initiative succeed.

Specifically, we're looking to support efforts to make sure that every NeurIPS, ACL, CVPR, ECCV, ICLR, ICML, and EMNLP paper is reproducible and that its claims are verifiable, by tracking model performance and predictions in W&B.
Join us
If you'd like to help us achieve that, please join the #ml-reproducibility channel in our Slack community, where we coordinate these efforts.
Subsidizing compute costs
We realize that some of these papers are computationally expensive to reproduce, and we're happy to offer participants $500 per paper reproduced, as long as it meets these guidelines.
How to compete
Pick a Paper
Claim a paper published at one of the selected AI conferences.
Integrate W&B
Instrument your reproduction code with W&B to log experiments, metrics, and hyperparameters.
Submit a Report
Compile your findings into a W&B report and submit it to our Gallery.

RC 2020 Highlights

Check out some of your favorite submissions from Reproducibility Challenge 2020!

ECA-Net: Efficient Channel Attention
For Deep Convolutional Neural Networks, ML Reproducibility Challenge 2020
W&B: A Reproducibility (Research) Perspective
How Weights & Biases optimised my attempt for the ML Reproducibility Challenge 2020
Reformer Reproducibility
Community submission to the Reproducibility Challenge 2020
Rigging the Lottery
Making all tickets winners