Weights & Biases: Your Automated Logging Companion for Reproducible Research
How W&B helps students, researchers, teachers, and academics of all stripes log research for easy collaboration, sharing, and reproducibility
Introduction
In this report, we'll discuss the challenges around reproducing your research and how W&B can help solve them. Let's jump right in:
Why Reproducing Research is Difficult–and Important
There's a paradox at the heart of machine learning. The amazing successes we've seen in fields like computer vision and natural language processing have been driven by a powerful technique called deep learning, which allows researchers to side-step many of the issues posed by traditional machine learning models or shallow, single-layer neural networks. But the same technique that has allowed us to build such powerful models is also the reason why deep learning is so hard to reproduce.
Why is that exactly? Essentially, deep learning models are difficult to reproduce because they are often trained on large amounts of data, using complicated algorithms, and on top of that they often require a lot of computational power. This combination of factors makes it very hard to exactly reproduce the training process, which in turn makes it hard to exactly reproduce the results.
The problem of reproducibility is compounded by the fact that deep learning models are often trained on proprietary data, which makes it hard to get access to the data in the first place. Even when data is available, it can be very hard to understand what's going on inside a deep learning model. This is because the models are often opaque, meaning that even the researchers who built them cannot always explain how they work.
All of this means that even though deep learning has achieved some amazing results, it's very hard to build on those results or to use them to solve new problems.
This is where Weights & Biases comes in.
Weights & Biases is always free for academics to manage the end-to-end machine learning lifecycle. It provides a number of features that help with reproducibility, collaboration, running experiments, and reporting on findings.
Here's how it can help make your research more successful–and more reproducible:
Research groups
Perhaps you're a graduate student or a PI whose lab is funded by grants. If you're working on a multi-year grant, your advisory board meets several times a year to review the state of the research conducted over the previous months, upcoming research plans for the next semester, and more. Wouldn't it be ideal if you had a catalogue of all the experiments run throughout the semester, their performance, the strategies tried, and promising new areas to explore?
By serving as a single system of record, Weights & Biases ensures that you never lose work.
W&B lets you log all of your data, experimental setup and design, trained models, and notes on model performance from you and your colleagues. They're all preserved.
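To make that concrete, here's a minimal sketch of what logging an experiment looks like in Python; the project name, team name, hyperparameters, and metric values below are placeholders rather than anything prescribed by W&B:

```python
import wandb

# Start a run; config records the experimental setup alongside the results.
run = wandb.init(
    project="thesis-experiments",   # placeholder project name
    entity="my-lab",                # placeholder academic team name
    config={"lr": 1e-3, "batch_size": 64, "epochs": 10},
    notes="Baseline model on the cleaned training split",
)

for epoch in range(run.config["epochs"]):
    train_loss, val_acc = 0.42, 0.87        # stand-ins for your real metrics
    wandb.log({"epoch": epoch, "train_loss": train_loss, "val_acc": val_acc})

run.finish()
```

Every run logged this way shows up in your team's workspace with its config, metrics, and notes attached.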
With less than five lines of code, you can build Reports to showcase findings with colleagues at the same desk or on the other side of the world, and you can create publication-ready graphics, charts, and reports with ease.
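Reports themselves are assembled and shared from the W&B web app, but the charts that feed them come from logged data. As a rough sketch (the project name and numbers are made up), a custom chart can be logged like this and later dropped into a Report panel:

```python
import wandb

run = wandb.init(project="thesis-experiments")   # placeholder project name

# Log a custom line chart; it appears in the workspace and can be added to a Report.
table = wandb.Table(data=[[x, x ** 2] for x in range(10)], columns=["step", "loss"])
wandb.log({"loss_curve": wandb.plot.line(table, "step", "loss", title="Loss vs. step")})

run.finish()
```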
Your data and different versions of it are preserved via our Artifacts functionality; any file type–image, audio, molecule, 3-D point cloud, model weights, etc.–is saved and versioned so that you're always using the correct data slice for each experiment.
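For illustration only (the artifact name, type, and local paths below are assumptions), dataset versioning with Artifacts can look like this:

```python
import wandb

# Version a dataset directory as an Artifact.
run = wandb.init(project="thesis-experiments", job_type="dataset-upload")
dataset = wandb.Artifact(
    "protein-structures",            # placeholder artifact name
    type="dataset",
    description="Cleaned training split, v2 preprocessing",
)
dataset.add_dir("data/train")        # placeholder local directory
run.log_artifact(dataset)
run.finish()

# In a later training run, pin the exact dataset version you trained on.
run = wandb.init(project="thesis-experiments", job_type="train")
artifact = run.use_artifact("protein-structures:latest")
data_dir = artifact.download()       # local path to that exact version
run.finish()
```

Because each run records which artifact version it consumed, the data slice behind any result can be traced later.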
Networked file-saving glitches are a thing of the past: Weights & Biases houses your experimental outputs in a secure, private environment, allowing you to restore local files–and recreate your experiment's setup–with a few lines of code.
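As a small example (the filename and run path are placeholders), wandb.save mirrors a local file into the run's cloud storage, and wandb.restore pulls it back down on any machine:

```python
import wandb

# While training: sync a local checkpoint to the run's cloud storage.
run = wandb.init(project="thesis-experiments")
# ... training code that writes model.h5 locally ...
wandb.save("model.h5")               # placeholder checkpoint filename
run.finish()

# Later, from any machine: retrieve the file to recreate the setup.
checkpoint = wandb.restore(
    "model.h5",
    run_path="my-lab/thesis-experiments/abc123",   # placeholder entity/project/run-id
)
```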
For resource-constrained situations in which you're sharing GPUs across several projects, you can even resume a failed run from an intermediate checkpoint so that you don't have to restart a model training experiment from the beginning.
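One way to set that up (the run ID below is a made-up example) is to give the run a stable ID and pass resume to wandb.init, so a restarted job reattaches to the existing run instead of creating a new one:

```python
import wandb

# Reattach to the same run after a crash or a preempted job.
run = wandb.init(
    project="thesis-experiments",    # placeholder project name
    id="resnet50-seed0",             # fixed, made-up run ID shared across restarts
    resume="allow",                  # continue this run if it already exists
)

# Load your most recent checkpoint (e.g. one synced earlier with wandb.save),
# then continue training; newly logged steps append to the same run history.
```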
And none of this requires any setup or ongoing support from your university's IT staff, cloud support contracts, or other external vendors.
Simply create a free W&B account at wandb.ai, click on 'Invite Team', choose 'Academic', and create your always-free-for-academics team.
Teaching or grading a machine learning course?
If you're teaching, administering, or running a machine or deep learning course, Weights & Biases can assist you as well.
Much like how W&B serves as a single system of record for the experiments you need to preserve when writing papers and analyzing study data, it can help your students as well: the tool enables teamwork whether your students are on campus or geographically distributed.
Students never lose work or precious hours of model training time to failed network saves or crashed models. They can also learn how to monitor and assess their model's resource utilization.
Finally, the W&B Reports functionality makes student submissions a breeze to grade. Upon course completion, students can easily turn the Reports they submitted as assignments into portfolio pieces, while Weights & Biases' single system of record helps them see and measure their skills growth over the duration of the course and beyond.
Whether you're a PI, a grad student doing research in a lab, or an undergraduate just embarking on their machine and deep learning journey, Weights & Biases can help you: as a single system of record, W&B acts as your research scribe, recording datasets and models and preserving the exact state in which your model was trained for future reference. Once model training is complete, Reports let you quickly and easily share written findings as well as dynamic charts and figures with commenters, enabling collaboration regardless of where you and your co-authors are located. Finally, W&B adds no overhead for you, your IT staff, or your students: it will remain free to use for academics, so sign up for your free account today and get started building field-transforming models!
Related reports:
Meta-Consolidation for Continual Learning (MERLIN) – a reproduction of the paper 'Meta-Consolidation for Continual Learning' by K J Joseph and Vineeth N Balasubramanian, accepted at the proceedings of Neural Information Processing Systems (NeurIPS 2020).
Reproducible Models with Weights & Biases – how Weights & Biases optimised my attempt for the ML Reproducibility Challenge 2020.