The AI Developer Platform
Weights & Biases helps AI developers build better models faster. Quickly track experiments, version and iterate on datasets, evaluate model performance, reproduce models, and manage your ML workflows end-to-end.
The world’s leading ML teams trust W&B

The Weights & Biases platform helps you streamline your ML workflow from end to end
- Experiments: Experiment tracking
- Reports: Collaborative dashboards
- Artifacts: Dataset and model versioning
- Tables: Interactive data visualization
- Sweeps: Hyperparameter optimization (see the sketch after this list)
- Launch: Automate ML workflows
- Models: Model lifecycle management
- LLM Monitoring: Observability for production ML
- Prompts: LLMOps and prompt engineering
- Weave: Interactive ML app builder
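For instance, a Sweeps run can be defined in a few lines. This is a minimal sketch, not a full recipe: the project name, search space, and placeholder loss below are illustrative.

import wandb

# Minimal sweep sketch; search space and project name are illustrative
sweep_config = {
    "method": "bayes",  # also: "grid" or "random"
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

def train():
    run = wandb.init()
    lr = run.config.learning_rate  # value chosen by the sweep controller
    run.log({"loss": 1.0 / lr})    # placeholder; log your real training loss

sweep_id = wandb.sweep(sweep_config, project="sweep-demo")
wandb.agent(sweep_id, function=train, count=10)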
Integrate quickly, track & version automatically
- Track, version, and visualize with just 5 lines of code
- Reproduce any model checkpoint (an Artifacts sketch follows the first example below)
- Monitor CPU and GPU usage in real time
“We’re now driving 50 or 100 times more ML experiments versus what we were doing before.”
import wandb
# 1. Start a W&B run
run = wandb.init(project="my_first_project")
# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01
# 3. Log metrics to visualize performance over time
for i in range(10):
    loss = 2 ** -i  # placeholder; replace with your real training loss
    run.log({"loss": loss})
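The versioning bullet above can be sketched with Artifacts in a few more lines; the artifact name and file path here are placeholders:

import wandb

run = wandb.init(project="my_first_project")
# Version a dataset; W&B deduplicates files and records lineage automatically
artifact = wandb.Artifact("training-data", type="dataset")
artifact.add_file("data/train.csv")  # placeholder path
run.log_artifact(artifact)
# Retrieve any version later to reproduce a result
dataset = run.use_artifact("training-data:latest")
data_dir = dataset.download()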
import wandb
import os
# 1. Set environment variables for the W&B project and tracing.
os.environ["LANGCHAIN_WANDB_TRACING"] = "true" os.environ["WANDB_PROJECT"] = "langchain-tracing"
# 2. Load LLMs, tools, and agents/chains
from langchain.llms import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools
llm = OpenAI(temperature=0)
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(
    tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
# 3. Serve the chain/agent, with all underlying complex LLM interactions automatically traced and tracked
agent.run("What is 2 raised to .123243 power?")
import wandb
from llama_index import ServiceContext
from llama_index.callbacks import CallbackManager, WandbCallbackHandler
# Initialize WandbCallbackHandler and pass any wandb.init args
wandb_args = {"project":"llamaindex"}
wandb_callback = WandbCallbackHandler(run_args=wandb_args)
# Pass wandb_callback to the service context
callback_manager = CallbackManager([wandb_callback])
service_context = ServiceContext.from_defaults(callback_manager=callback_manager)
import wandb
# 1. Start a new run
run = wandb.init(project="gpt5")
# 2. Save model inputs and hyperparameters
config = run.config
config.dropout = 0.01
# 3. Log gradients and model parameters (assumes `model`, `train_loader`, and `args` are defined)
run.watch(model)
for batch_idx, (data, target) in enumerate(train_loader):
    ...  # forward/backward pass that computes `loss`
    if batch_idx % args.log_interval == 0:
        # 4. Log metrics to visualize performance
        run.log({"loss": loss})
import wandb
# 1. Define which wandb project to log to and name your run
run = wandb.init(project="gpt-5",
run_name="gpt-5-base-high-lr")
# 2. Add wandb in your `TrainingArguments`
args = TrainingArguments(..., report_to="wandb")
# 3. W&B logging will begin automatically when your start training your Trainer
trainer = Trainer(..., args=args)
trainer.train()
import wandb
import tensorflow as tf
# 1. Start a new run
run = wandb.init(project="gpt4")
# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01
# Model training here
# 3. Log metrics to visualize performance over time
with tf.Session() as sess:
    # ...
    wandb.tensorflow.log(tf.summary.merge_all())
import wandb
from wandb.keras import (
    WandbMetricsLogger,
    WandbModelCheckpoint,
)
# 1. Start a new run
run = wandb.init(project="gpt-4")
# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01
... # Define a model
# 3. Log layer dimensions and metrics
wandb_callbacks = [
    WandbMetricsLogger(log_freq=5),
    WandbModelCheckpoint("models"),
]
model.fit(
    X_train, y_train, validation_data=(X_test, y_test),
    callbacks=wandb_callbacks,
)
import wandb
wandb.init(project="visualize-sklearn")
# Model training here
# Log classifier visualizations
wandb.sklearn.plot_classifier(clf, X_train, X_test, y_train, y_test, y_pred, y_probas,
                              labels, model_name="SVC", feature_names=None)
# Log regression visualizations
wandb.sklearn.plot_regressor(reg, X_train, X_test, y_train, y_test, model_name="Ridge")
# Log clustering visualizations
wandb.sklearn.plot_clusterer(kmeans, X_train, cluster_labels, labels=None, model_name="KMeans")
import wandb
import xgboost
from wandb.xgboost import wandb_callback
# 1. Start a new run
run = wandb.init(project="visualize-models")
# 2. Add the callback (assumes `param`, `xg_train`, `xg_test`, `num_round`, and `watchlist` are defined)
bst = xgboost.train(param, xg_train, num_round, watchlist, callbacks=[wandb_callback()])
# Get predictions
pred = bst.predict(xg_test)

Visualize your data and uncover critical insights
- Visualize live metrics, datasets, logs, code, and system stats in a centralized location (see the Table sketch after this list)
- Analyze collaboratively across your team to uncover key insights
- Compare runs side by side to debug easily and build iteratively
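As one sketch of this workflow, per-example predictions can be logged as an interactive W&B Table; the project name, column names, and values below are illustrative:

import wandb

run = wandb.init(project="visualize-predictions")  # illustrative project name
# Log per-example predictions as a filterable, sortable table
table = wandb.Table(columns=["id", "label", "prediction", "confidence"])
table.add_data(0, "cat", "cat", 0.98)
table.add_data(1, "dog", "cat", 0.51)
run.log({"predictions": table})
run.finish()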
“Saving everything in your model pipelines is essential for serious machine learning: debugging, provenance, reproducibility. W&B is a great tool for getting this done.”
Improve performance so you can evaluate and deploy with confidence
- Experiment collaboratively to find the best model (a programmatic sketch follows this list)
- Evaluate models, discuss bugs, and demonstrate progress
- Inform stakeholders with configurable reports
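Runs can also be compared programmatically through the W&B public API. This is a minimal sketch, assuming an existing project; the entity, project, and metric names are placeholders:

import wandb

api = wandb.Api()
# Fetch runs from a project, sorted by a summary metric (descending)
runs = api.runs("my-entity/my-project", order="-summary_metrics.accuracy")
best = runs[0]
print(best.name, best.summary.get("accuracy"))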
W&B allows us to scale up insights from a single researcher to the entire team and from a single machine to thousands.

The Weights & Biases ecosystem
Manage your entire ML lifecycle with a unified interface over any ML infrastructure
- PyTorch
- XGBoost
- Hugging Face
- TensorFlow
- OpenAI Models
- OpenCV
- SageMaker
- Azure ML
- Run.ai
- Vertex AI
- NVIDIA DGX
- Anyscale
- Airflow
- GitHub Actions
- Metaflow
- Kubeflow