The developer-first MLOps platform
Integrate quickly,
track & version automatically
- Track, version and visualize with just 5 lines of code
- Reproduce any model checkpoint
- Monitor CPU and GPU usage in real time
import wandb

# 1. Start a W&B run
run = wandb.init(project="my_first_project")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training here

# 3. Log metrics to visualize performance over time
for i in range(10):
    run.log({"loss": loss})  # `loss` comes from your training step
import tensorflow as tf
import wandb

# 1. Start a new run
run = wandb.init(project="gpt4")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training here

# 3. Log metrics to visualize performance over time (TF1-style sessions)
with tf.Session() as sess:
    # ...
    wandb.tensorflow.log(tf.summary.merge_all())
import wandb

# 1. Start a new run
run = wandb.init(project="gpt5")

# 2. Save model inputs and hyperparameters
config = run.config
config.dropout = 0.01

# 3. Log gradients and model parameters
run.watch(model)
for batch_idx, (data, target) in enumerate(train_loader):
    ...
    if batch_idx % args.log_interval == 0:
        # 4. Log metrics to visualize performance
        run.log({"loss": loss})
import wandb
from wandb.keras import WandbMetricsLogger, WandbModelCheckpoint

# 1. Start a new run
run = wandb.init(project="gpt-4")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

... # Define a model

# 3. Log layer dimensions and metrics
wandb_callbacks = [
    WandbMetricsLogger(log_freq=5),
    WandbModelCheckpoint("models"),
]
model.fit(
    X_train, y_train, validation_data=(X_test, y_test),
    callbacks=wandb_callbacks,
)
import wandb

# 1. Start a new run
wandb.init(project="visualize-sklearn")

# Model training here
# (clf, reg, kmeans and the data splits are assumed to be defined above)

# Log classifier visualizations
wandb.sklearn.plot_classifier(
    clf, X_train, X_test, y_train, y_test, y_pred, y_probas,
    labels, model_name="SVC", feature_names=None,
)

# Log regression visualizations
wandb.sklearn.plot_regressor(reg, X_train, X_test, y_train, y_test, model_name="Ridge")

# Log clustering visualizations
wandb.sklearn.plot_clusterer(kmeans, X_train, cluster_labels, labels=None, model_name="KMeans")
import wandb
from transformers import Trainer, TrainingArguments

# 1. Define which wandb project to log to and name your run
run = wandb.init(project="gpt-5", name="gpt-5-base-high-lr")

# 2. Add wandb to your `TrainingArguments`
args = TrainingArguments(..., report_to="wandb")

# 3. W&B logging begins automatically when you start training with your Trainer
trainer = Trainer(..., args=args)
trainer.train()
import xgboost
import wandb
from wandb.xgboost import wandb_callback

# 1. Start a new run
run = wandb.init(project="visualize-models")

# 2. Add the callback
# (param, xg_train, num_round, watchlist are defined by your training setup)
bst = xgboost.train(param, xg_train, num_round, watchlist, callbacks=[wandb_callback()])

# Get predictions
pred = bst.predict(xg_test)
Visualize your data and
uncover critical insights
- Visualize live metrics, datasets, logs, code, and system stats in a centralized location
- Analyze collaboratively across your team to uncover key insights
- Compare side-by-side to debug easily, and build iteratively

Improve performance so you can
evaluate and deploy with confidence
- Experiment collaboratively to find the best model
- Evaluate models, discuss bugs, and demonstrate progress
- Inform stakeholders with configurable reports

The Weights & Biases ecosystem
The leading ML platform that provides value to your entire team
The user experience that makes redundant work disappear
Track every detail of your ML pipeline automatically. Visualize results with relevant context. Drag & drop analysis to uncover insights – your next best model is just a few clicks away.

The ML workflow co-designed with ML engineers
Build streamlined ML workflows incrementally. Configure and customize every step. Leverage intelligent defaults so you don’t have to reinvent the wheel.

A system of record that makes all histories reproducible and discoverable
Reproduce any experiment instantly. Track model evolution with changes explained along the way. Easily discover and build on top of your team’s work.

Flexible deployments,
easy integration
Deploy W&B to your infrastructure of choice, with W&B-managed or self-managed options available. Easily integrate with your ML stack and tools with no vendor lock-in.

Bridge ML Practitioners
and MLOps
Automate and scale ML workloads in one collaborative interface – ML practitioners get the simplicity, MLOps teams get the visibility.
Scale ML production with governance
A centralized system of record for all your ML projects. Manage model lifecycle and CI/CD to accelerate production. Understand model evolution and explain business impact to leadership.

Deliver ROI in the real world
Accelerate innovation to market and deliver ongoing business impact. W&B enables running thousands of experiments iteratively and collaboratively, all while continuously optimizing every part of your ML system over time.

Any industry, any use case
Customers from diverse industries trust W&B with a variety of ML use cases. From autonomous vehicles to drug discovery and from customer support automation to generative AI, W&B’s flexible workflows handle all your custom needs.

Let the team focus on
value-added activities
Focus only on core ML activities – W&B automatically takes care of the boring tasks for you: reproducibility, auditability, infrastructure management, and security & governance.
Future-proof your ML workflow – W&B co-designs with OpenAI and other innovators to encode their secret sauce so you don’t need to reinvent the wheel.


Trusted by 500,000+ machine learning practitioners at 700+ companies and research institutions
View our case studies →
Featured content
MLOps Whitepaper
