Weights & Biases resource library
Below, you’ll find everything from case studies and tutorials to podcasts and free ML courses. If you’re new to our platform, we recommend checking out our end-to-end demo to learn how Weights & Biases can help at every stage of the model development and deployment cycle.
Interactive machine learning courses from Weights & Biases
LLM Engineering: Structured Outputs
Model CI/CD
Training and Fine-tuning Large Language Models (LLMs)
Building LLM-Powered Applications
W&B 101: Getting Started
Effective MLOps: Data Validation for ML
Effective MLOps: CI/CD for ML
Effective MLOps: Model Development
MLOps Whitepaper
Customer success stories
How Microsoft Leveraged Weights & Biases to Build the Models Behind Ink
“We were drawn to W&B because we realized our existing approach just didn’t work with a remote team. W&B is a much better home for our experimentation results. Plus it’s super easy to use.”
Lyft’s High-Capacity End-to-End Camera-Lidar Fusion for 3D Detection
“[With Weights & Biases] we demonstrated our workflow in training high-capacity models, reducing overfitting while increasing model capacity, and maintaining fast iteration speed.”
Toyota Research Institute Tracks Experiments using Weights & Biases
“Weights & Biases is a key piece of our fast-paced, cutting-edge, large-scale research workflow: great flexibility, performance, and user experience.”
AI for AG: Production Machine Learning for Agriculture
“To monitor and evaluate our machine learning runs, we have found the Weights & Biases platform to be the best solution. Their API makes it fast to integrate W&B logging into an existing codebase.”
How Woven Leverages W&B to Drive Continuous Learning
“Experiment tracking has given us 10x velocity and enabled us to share results with each other much faster, with tractability and traceability.”
Making Simulations More Human with Inverted AI
“We got to the point where we had so many models and data versions that we simply couldn’t manually keep track of all of them. Once we started taking advantage of Artifacts, it’s been very helpful.”
How Socure Fights Fraud with Machine Learning
“Weights & Biases gave our team a full and complete understanding of our model’s lineage, from datasets to training to production artifacts. We saw a 15% increase in our model building efficiency while saving about 15% on hardware spend on top of that.”
Leveraging AI for Visual FX at MARZ
“Once people started seeing the value of W&B, it kind of just exploded, and everyone on the ML team now builds everything on it in the company.”
Designing ML Models for Millions of Consumer Robots
“The last mile of deploying machine learning to production is really long. So having a team and tools [like W&B] dedicated to focusing on just how hard that last mile is has really paid off.”
LLM whitepaper
Companies like OpenAI and Stability AI rely on Weights & Biases to train their generative models. In this whitepaper, you’ll learn how to fine-tune and prompt-engineer the right model for your use case.
Listen to Gradient Dissent, our podcast with ML pioneers
Jensen Huang — NVIDIA’s CEO on the Next Generation of AI and MLOps
Emad Mostaque — Stable Diffusion, Stability AI, and What’s Next
Boris Dayma — The Story Behind DALL-E mini, the Viral Phenomenon
See Weights & Biases in action on our blog, Fully Connected
AlphaFold-ed Proteins in W&B Tables
Prompt Engineering LLMs with LangChain and W&B
How to Run LLMs Locally
WandBot: How We Built a GPT-4-Powered Chat Support for W&B
A Recipe for Training Large Models
Improving Generative Images with Instructions: Prompt-to-Prompt Image Editing
What is CI/CD for Machine Learning?
What is your current MLOps maturity?
The Weights & Biases platform helps you streamline your workflow from end to end
Models
Experiments
Track and visualize your ML experiments
Sweeps
Optimize your hyperparameters
Registry
Publish and share your ML models and datasets
Automations
Trigger workflows automatically
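As a rough illustration of the Experiments feature above, here is a minimal experiment-tracking sketch using the wandb Python library. The project name, config values, and training loop are hypothetical placeholders, not a definitive integration.

import wandb

# Minimal experiment-tracking sketch; project name and config are placeholders.
run = wandb.init(project="my-demo-project", config={"lr": 1e-3, "epochs": 5})

for epoch in range(run.config.epochs):
    train_loss = 1.0 / (epoch + 1)  # stand-in for a real training step
    wandb.log({"epoch": epoch, "train_loss": train_loss})  # log metrics to the run

run.finish()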
Weave
Traces
Explore and debug LLMs
Evaluations
Rigorous evaluations of GenAI applications
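And as a rough sketch of how Weave traces an LLM-powered function, assuming the weave Python package: the project name and the echo function below are hypothetical stand-ins; in practice the decorated function would call your own model.

import weave

weave.init("my-demo-project")  # placeholder project name

@weave.op()
def answer(question: str) -> str:
    # Stand-in for an LLM call; Weave records the inputs and outputs as a trace.
    return f"echo: {question}"

answer("What does Weave trace?")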