The world’s leading ML teams trust us

Integrate quickly,
track & version automatically

  • Track, version and visualize with just 5 lines of code
  • Reproduce any model checkpoint (see the artifact sketch after this list)
  • Monitor CPU and GPU usage in real time
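
As an illustration of checkpoint versioning, here is a minimal sketch that logs a saved checkpoint as a W&B Artifact and retrieves it later; the file path, artifact name, and project are placeholders, not part of the snippets below.

import wandb

run = wandb.init(project="my_first_project")

# Version a saved checkpoint as an Artifact
artifact = wandb.Artifact("my-model", type="model")
artifact.add_file("checkpoint.pt")  # placeholder path to your checkpoint file
run.log_artifact(artifact)

# Later: fetch the exact version to reproduce results
artifact = run.use_artifact("my-model:latest")
checkpoint_dir = artifact.download()
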
“We're now driving 50 or 100 times more ML experiments versus what we were doing before.”
Phil Brown, Director of Applications
Graphcore
import wandb

# 1. Start a W&B run
run = wandb.init(project="my_first_project")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training here
# 3. Log metrics to visualize performance over time

for i in range(10):
    run.log({"loss": loss})  # "loss" comes from the training step above
import wandb
import tensorflow as tf
# 1. Start a new run
run = wandb.init(project="gpt4")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training here
# 3. Log metrics to visualize performance over time

with tf.Session() as sess:
    # ...
    wandb.tensorflow.log(tf.summary.merge_all())
import wandb
# 1. Start a new run
run = wandb.init(project="gpt5")
# 2. Save model inputs and hyperparameters
config = run.config
config.dropout = 0.01
# 3. Log gradients and model parameters
run.watch(model)
for batch_idx, (data, target) in enumerate(train_loader):
    ...
    if batch_idx % args.log_interval == 0:
        # 4. Log metrics to visualize performance
        run.log({"loss": loss})
import wandb
from wandb.keras import (
   WandbMetricsLogger,
   WandbModelCheckpoint,
)

# 1. Start a new run
run = wandb.init(project="gpt-4")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01
...  # Define a model
# 3. Log layer dimensions and metrics
wandb_callbacks = [
    WandbMetricsLogger(log_freq=5),
    WandbModelCheckpoint("models"),
]
model.fit(
    X_train,
    y_train,
    validation_data=(X_test, y_test),
    callbacks=wandb_callbacks,
)
import wandb
wandb.init(project="visualize-sklearn")

# Model training here
# Log classifier visualizations

wandb.sklearn.plot_classifier(
    clf, X_train, X_test, y_train, y_test, y_pred, y_probas, labels,
    model_name="SVC", feature_names=None,
)

# Log regression visualizations
wandb.sklearn.plot_regressor(reg, X_train, X_test, y_train, y_test, model_name="Ridge")

# Log clustering visualizations
wandb.sklearn.plot_clusterer(kmeans, X_train, cluster_labels, labels=None, model_name="KMeans")
import wandb
from transformers import Trainer, TrainingArguments

# 1. Define which wandb project to log to and name your run
run = wandb.init(project="gpt-5", run_name="gpt-5-base-high-lr")

# 2. Add wandb to your `TrainingArguments`
args = TrainingArguments(..., report_to="wandb")

# 3. W&B logging will begin automatically when you start training with your Trainer
trainer = Trainer(..., args=args)
trainer.train()
import wandb
import xgboost
from wandb.xgboost import wandb_callback

# 1. Start a new run
run = wandb.init(project="visualize-models")

# 2. Add the callback
bst = xgboost.train(param, xg_train, num_round, watchlist, callbacks=[wandb_callback()])

# Get predictions
pred = bst.predict(xg_test)

Visualize your data and
uncover critical insights

  • Visualize live metrics, datasets, logs, code, and system stats in a centralized location (see the table sketch after this list)
  • Analyze collaboratively across your team to uncover key insights
  • Compare side-by-side to debug easily, and build iteratively
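
As a concrete illustration of bringing data into one place, here is a minimal sketch that logs a small table of predictions with wandb.Table; the project name, columns, and values are made-up placeholders.

import wandb

run = wandb.init(project="visualize-predictions")

# Log a small table of predictions so the team can inspect and filter it in the W&B UI
table = wandb.Table(columns=["id", "prediction", "label"])
table.add_data(0, "cat", "cat")
table.add_data(1, "dog", "cat")
run.log({"predictions": table})

run.finish()
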
“Saving everything in your model pipelines is essential for serious machine learning: debugging, provenance, reproducibility. W&B is a great tool for getting this done.”
Richard Socher, former Chief Data Scientist
Salesforce

Improve performance so you can
evaluate and deploy with confidence

  • Experiment collaboratively to find the best model (see the sweep sketch after this list)
  • Evaluate models, discuss bugs, and demonstrate progress
  • Inform stakeholders with configurable reports
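
As one way to run many experiments systematically, here is a minimal hyperparameter sweep sketch; the project name, search space, and train() body are placeholder assumptions.

import wandb

# Define the search strategy and space
sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

def train():
    run = wandb.init()
    # ... train a model using run.config.learning_rate and run.config.batch_size ...
    run.log({"loss": 0.0})  # placeholder metric

sweep_id = wandb.sweep(sweep_config, project="my_first_project")
wandb.agent(sweep_id, function=train, count=10)
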
“W&B allows us to scale up insights from a single researcher to the entire team and from a single machine to thousands.”
Wojciech Zaremba, Co-Founder
OpenAI

The Weights & Biases ecosystem

Manage your entire ML lifecycle with a unified interface over any ML infrastructure
ML framework
PyTorch
XGBoost
HuggingFace
TensorFlow
GPT-3
spaCy
& 19,000+ ML libraries and repos
Training environment
SageMaker
Azure ML
Run.ai
Vertex AI
NVIDIA DGX
Anyscale
Workflow orchestration
Airflow
GitHub Actions
Metaflow
Kubeflow
Jenkins
Flyte
Astronomer
Inference environment
SageMaker
Azure ML
Run.ai
Vertex AI
NVIDIA DGX
OctoML

The leading ML platform that provides value to your entire team

FOR ML PRACTITIONERS

The user experience that makes redundant work disappear

Track every detail of your ML pipeline automatically. Visualize results with relevant context. Use drag-and-drop analysis to uncover insights; your next best model is just a few clicks away.

FOR ML PRACTITIONERS

The ML workflow co-designed with ML engineers

Build streamlined ML workflows incrementally. Configure and customize every step. Leverage intelligent defaults so you don’t have to reinvent the wheel.

FOR ML PRACTITIONERS

A system of record that makes all histories reproducible and discoverable

Reproduce any experiment instantly. Track model evolution with changes explained along the way. Easily discover and build on top of your team’s work.

FOR MLOps

Flexible deployments,
easy integration

Deploy W&B on the infrastructure of your choice, with both W&B-managed and self-managed options available. Easily integrate with your ML stack and tools, with no vendor lock-in.

See all deployment options →
See W&B partners & integrations →
FOR MLOps

Bridge ML Practitioners
and MLOps

Automate and scale ML workloads in one collaborative interface: ML practitioners get the simplicity, MLOps teams get the visibility.

FOR MLOps

Scale ML production with governance

A centralized system of record for all your ML projects. Manage the model lifecycle and CI/CD to accelerate the path to production. Understand model evolution and explain business impact to leadership.

Read our W&B MLOps Whitepaper →
FOR ML LEADERS

Deliver ROI in the real world

Bring innovation to market faster and deliver ongoing business impact. W&B enables running thousands of experiments iteratively and collaboratively, while continuously optimizing every part of your ML system over time.

FOR ML LEADERS

Any industry, any use case

Customers from diverse industries trust W&B with a variety of ML use cases. From autonomous vehicles to drug discovery, and from customer support automation to generative AI, W&B’s flexible workflow handles all your custom needs.

FOR ML LEADERS

Let the team focus on
value-added activities

Focus only on core ML activities: W&B automatically takes care of the tedious tasks for you, including reproducibility, auditability, infrastructure management, and security and governance.

Future-proof your ML workflow: W&B co-designs with OpenAI and other innovators to encode their secret sauce so you don’t need to reinvent the wheel.

Never lose track of
another ML project

Trusted by 500,000+ machine learning practitioners at 700+ companies and research institutions

View our case studies →

"W&B was fundamental for launching our internal machine learning systems, as it enables collaboration across various teams."

Hamel Husain
GitHub

"W&B is a key piece of our fast-paced, cutting-edge, large-scale research workflow: great flexibility, performance, and user experience."

Adrien Gaidon
Toyota Research Institute

"W&B allows us to scale up insights from a single researcher to the entire team and from a single machine to thousands."

Wojciech Zaremba
Cofounder of OpenAI

Featured content

AlphaFold-ed proteins in W&B Tables

Emmy-nominated Visual FX with W&B

A Deep Dive Into OpenCLIP from OpenAI

Making My Kid a Jedi Master With Stable Diffusion and Dreambooth

Lyft's High-Capacity End-to-End Camera-Lidar Fusion for 3D Detection

How To Build an Efficient NLP Model

Jensen Huang — NVIDIA's CEO on the Next Generation of AI and MLOps

Emad Mostaque — Stable Diffusion, Stability AI, and What’s Next

Boris Dayma — The Story Behind DALL-E mini, the Viral Phenomenon

MLOps whitepaper

Read how building the right technical stack for your machine learning team supports core business efforts and safeguards IP

Stay connected with the ML community

Working on machine learning projects? We're bringing together ML practitioners from across industry and academia.

Community

Join our community of machine learning practitioners.

Podcast

Go behind the scenes with ML industry leaders.

Webinar

Sign up for our virtual events to learn best practices for your ML projects.

YouTube

Watch videos about cool ML projects, interviews, W&B tips, and much more!

Try Weights & Biases