Weights and Biases raises $45m to build better tools for ML practitioners

Lukas Biewald

Weights and Biases just raised $45m from Insight Partners. I’m blown away by how fast we’ve grown and I wanted to take a moment to write down why Shawn, Chris and I decided to start this company and where we go from here.

At Weights and Biases our mission is simple: to build great tools for ML practitioners. We can do this uniquely well because 1) we were once ML practitioners, 2) we have been building tools for ML practitioners for the last twenty years, and 3) we constantly listen carefully to the needs of ML practitioners.

We aspire to build reliable tools with simple integrations but no magic. We want to build tools that noticeably improve the day-to-day lives of ML practitioners, while helping teams improve collaboration and transparency.

Before I go any further: if you haven't tried our experiment tracking or hyperparameter search tools, you should! With less than five lines of integration code you can track, visualize, and share all of your ML experiments. You can try out a sample dashboard to see what this looks like or find tons of examples in our gallery.
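To give a sense of what that integration looks like, here is a minimal sketch; the project name, hyperparameters, and stand-in loss below are illustrative, not taken from our docs:

```python
import math
import wandb

# Start a run and record its hyperparameters (the values here are made up).
wandb.init(project="my-project", config={"learning_rate": 0.01, "epochs": 10})

for epoch in range(wandb.config.epochs):
    # Stand-in for a real training step; swap in your own training loop.
    loss = math.exp(-wandb.config.learning_rate * epoch)
    wandb.log({"epoch": epoch, "loss": loss})
```

Every run logged this way lands in a live dashboard you can share with your team.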

Why we love to build products for ML practitioners

I believe that machine learning has the potential to be a transformational force for good. We hear a lot of lofty claims about what ML can do, and you may not believe all of them, but you can't deny that Blue River's work to reduce pesticide use with computer vision or DeepMind's recent breakthroughs in protein folding show that ML algorithms have the potential to enrich our lives, help with environmental sustainability, and cure diseases.

One of the biggest reasons I’m optimistic about machine learning is that I’ve worked with the ML community for the last two decades and, by and large, they are an incredibly thoughtful and idealistic group of people—excited about building things to help humanity and positively impact the world.

Really great tools for ML practitioners can both accelerate the creation of useful machine learning models and help ensure that machine learning is a force for good.

Product feedback session with Peter Welinder at OpenAI

Why machine learning tools are important right now

In under a decade, machine learning has gone from mostly a research topic to a technology we interact with daily, one that touches nearly every industry on the planet. Production-oriented tools and workflows have not caught up. Software has had over fifty years to refine workflows and developer tools, primarily for stability, and still we encounter bugs on a daily basis.

Machine learning, especially deep learning, looks like software in that it runs on a computer, but it breaks a lot of the assumptions that we're used to in the software world. Unlike traditional software, machine learning models don't have logic that can be easily explained. Unlike traditional software, machine learning models are huge and don't diff incrementally, so software-style version control doesn't work well. Unlike traditional software, machine learning models are best built on exotic chips (GPUs and now TPUs), breaking the entire stack of software abstractions (hardware, operating system, library, application) at its lowest level. The list goes on.

In practice, this means that most responsible machine learning practitioners and teams trying to build safe, reliable, auditable production systems end up building their own ad hoc tools for things that were solved for software applications decades ago. For example, there might not be a company on the planet deploying production software without some kind of version control. But most Fortune 500 machine learning teams are not using systematic version control, which makes it surprisingly hard to trace what data a model was trained on or even what model was running in production at a given point in time.

This is more than an inconvenience for model builders. Ultimately, when models aren’t systematically tracked, it’s impossible to make sure they’re safe.

Why Weights and Biases tools are unique

I believe that we can solve these problems uniquely well.

The first set of companies in MLOps focused on a top-down sale, convincing engineering leaders that they could solve their reproducibility and reliability issues with an end-to-end platform. As long as the entire team buys into the platform and process, a lot of improvement is possible.

But great machine learning practitioners feel constrained by an end-to-end platform. Machine learning best practices change quickly, and no one company can solve every problem. The successful developer tools companies of the last few decades, like GitHub and Datadog, don't work like this. They solve a specific problem and focus on a great experience for developers, not executives. At the end of the day, a software executive's biggest pain point is keeping their great developers happy. In machine learning, this is even more true: if your company doesn't use tools that machine learning practitioners love, you have no chance of building a great machine learning team. That's why I don't think a top-down sale can work in MLOps, and why, in the long run, these companies won't succeed.

The next set of companies in MLOps look to me like products built by devops teams for devops teams. They assume a detailed understanding of how hardware and low-level software work. But most ML practitioners are researchers and data scientists first, not infrastructure engineers, and tools that assume devops expertise don't fit the way they actually work.

I feel proudest of our tools when I see us supporting and systematizing the informal collaboration patterns already emerging in ML teams. One example is laid out in The Science of Debugging with W&B Reports, where Sarah Jane goes into detail on how Latent Space uses our tools to build better models.

Where we go from here

There is a burning need right now for better tools. Our financing allows us to address these needs as quickly as we possibly can while building high-quality software.

Weights and Biases started out as an experiment tracking tool, and we are really proud of how many people have come to rely on it. At the same time, our users are constantly asking us to solve other pain points in their day-to-day work.

We want to address as many issues in an ML practitioner's workflow as we can. Last year we launched sweeps, a lightweight hyperparameter optimization tool, and artifacts, a versioning system for tracking models and datasets. This year we plan to launch a model evaluation and prediction visualization tool, and we have a whole lot more in the works.
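For a flavor of what these look like in code, here is a hedged sketch of both; the sweep configuration, metric, search ranges, and file names are invented for illustration:

```python
import math
import wandb

# --- sweeps: define a search space, then let an agent run trials ---
# The method, metric, and parameter range below are illustrative only.
sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {"learning_rate": {"min": 0.0001, "max": 0.1}},
}
sweep_id = wandb.sweep(sweep_config, project="my-project")

def train():
    run = wandb.init()
    # Stand-in for a real training loop; the sweep fills in run.config.
    loss = math.exp(-run.config.learning_rate)
    wandb.log({"loss": loss})

wandb.agent(sweep_id, function=train, count=5)

# --- artifacts: version a dataset alongside the runs that produce or use it ---
run = wandb.init(project="my-project", job_type="dataset-upload")
with open("data.csv", "w") as f:  # hypothetical dataset file for this example
    f.write("x,y\n1,2\n")
artifact = wandb.Artifact("training-data", type="dataset")
artifact.add_file("data.csv")
run.log_artifact(artifact)
run.finish()
```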

We're the early market leader because of our relentless drive to understand our customers and make something useful. If you're interested in joining us or trying our tools, or if you have any feedback, we would love to talk to you. A couple of places you can get started: