Developer-first
MLOps platform

Build better models faster with experiment tracking, dataset versioning, and model management

Trusted by 100,000+ ML practitioners
01

Integrate quickly

Track, compare, and visualize ML experiments with 5 lines of code. Free for academic and open source projects.

Try a Live Notebook
# Flexible integration for any Python script
import wandb
# 1. Start a W&B run
wandb.init(project='gpt3')
# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01
# Model training here
# 3. Log metrics over time to visualize performance
wandb.log({"loss": loss})
# TensorFlow (1.x-style session shown)
import wandb
import tensorflow as tf
# 1. Start a W&B run
wandb.init(project='gpt3')
# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01
# Model training here
# 3. Log metrics over time to visualize performance
with tf.Session() as sess:
    # ...
    wandb.tensorflow.log(tf.summary.merge_all())
# PyTorch
import wandb
# 1. Start a new run
wandb.init(project="gpt-3")
# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01
# 3. Log gradients and model parameters
wandb.watch(model)
for batch_idx, (data, target) in enumerate(train_loader):
    if batch_idx % args.log_interval == 0:
        # 4. Log metrics to visualize performance
        wandb.log({"loss": loss})
# Keras
import wandb
from wandb.keras import WandbCallback
# 1. Start a new run
wandb.init(project="gpt-3")
# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01
# ... Define a model here
# 3. Log layer dimensions and metrics over time
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          callbacks=[WandbCallback()])
# Scikit-learn
import wandb
wandb.init(project="visualize-sklearn")
# Model training here
# Log classifier visualizations
wandb.sklearn.plot_classifier(clf, X_train, X_test, y_train, y_test, y_pred, y_probas, labels, model_name='SVC', feature_names=None)
# Log regression visualizations
wandb.sklearn.plot_regressor(reg, X_train, X_test, y_train, y_test, model_name='Ridge')
# Log clustering visualizations
wandb.sklearn.plot_clusterer(kmeans, X_train, cluster_labels, labels=None, model_name='KMeans')
# Hugging Face Transformers
# 1. Import wandb and log in
import wandb
wandb.login()
# 2. Define which W&B project to log to and name your run
wandb.init(project="gpt-3", name='gpt-3-base-high-lr')
# 3. Add wandb to your Hugging Face `TrainingArguments`
from transformers import Trainer, TrainingArguments
args = TrainingArguments(..., report_to='wandb')
# 4. W&B logging begins automatically when you start training with your Trainer
trainer = Trainer(..., args=args)
trainer.train()
# XGBoost
import wandb
import xgboost
# 1. Start a new run
wandb.init(project="visualize-models", name="xgboost")
# 2. Add the callback
bst = xgboost.train(param, xg_train, num_round, watchlist, callbacks=[wandb.xgboost.wandb_callback()])
# Get predictions
pred = bst.predict(xg_test)
02

Visualize seamlessly

Add W&B's lightweight integration to your existing ML code and quickly get live metrics, terminal logs, and system stats streamed to the centralized dashboard.

Watch Demo
03

Collaborate in real time

Explain how your model works, show graphs of how model versions improved, discuss bugs, and demonstrate progress towards milestones.

View Reports

Designed for all use cases

Central dashboard

A system of record for your model results

Add a few lines to your script, and each time you train a new version of your model, you'll see a new experiment stream live to your dashboard.

Try it Out

Hyperparameter sweep

Try dozens of model versions quickly

Optimize models with our massively scalable hyperparameter search tool. Sweeps are lightweight, fast to set up, and plug in to your existing infrastructure for running models.
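A sweep is driven by a small configuration that names the search method, the metric to optimize, and the parameter ranges to explore. A minimal sketch (the project name, metric, and ranges are illustrative; actually launching the sweep requires a W&B login):

```python
# A minimal sweep configuration, written as a plain dict (values are illustrative)
sweep_config = {
    "method": "bayes",  # "grid", "random", or "bayes"
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

# With a W&B account, you would launch the sweep like this:
# sweep_id = wandb.sweep(sweep_config, project="my-project")
# wandb.agent(sweep_id, function=train)  # train() calls wandb.init() and wandb.log()
```

Each agent pulls the next hyperparameter combination from the sweep server, so you can scale out simply by starting more agents on more machines.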

Try it Out

Artifact tracking

Lightweight model and dataset versioning

Save every detail of your end-to-end machine learning pipeline — data preparation, data versioning, training, and evaluation.
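Versioning a dataset takes only a few lines. A minimal sketch, assuming a W&B login; the project and artifact names here are illustrative:

```python
def log_dataset_artifact(path, project="my-project"):
    """Upload a file as a versioned dataset artifact (requires a W&B login).

    The project name and artifact name are illustrative placeholders.
    """
    import wandb  # imported lazily so the sketch can be read without wandb installed
    run = wandb.init(project=project, job_type="dataset-upload")
    artifact = wandb.Artifact("training-data", type="dataset")
    artifact.add_file(path)        # use artifact.add_dir(...) for a whole directory
    run.log_artifact(artifact)     # W&B assigns versions automatically: v0, v1, ...
    run.finish()
```

Downstream runs that consume the artifact record which version they used, giving you lineage from raw data through to the trained model.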

Try it Out

Collaborative documents

Explore results and share findings

It's never been easier to share project updates. Explain how your model works, show graphs of how model versions improved, discuss bugs, and demonstrate progress towards milestones.

Try it Out

Collaboration

Seamlessly share progress across projects.

Manage team projects with a lightweight system of record. It's easy to hand off projects when every experiment is automatically well documented and saved centrally.

Try W&B

Reproduce results

Effortlessly capture configurations

With Weights & Biases, your team can standardize how experiments are recorded, capturing the hyperparameters, metrics, input data, and exact code version that trained each model.

Try W&B

Debug ML models

Focus your team on the hard machine learning problems

Let Weights & Biases take care of the legwork of tracking and visualizing performance metrics, example predictions, and even system metrics to identify performance issues.
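Logging example predictions next to your metrics makes failure cases visible at a glance. A sketch of the idea: the sample data below is invented for illustration, and the W&B table-logging calls are shown in comments because they need an active run:

```python
# Build example predictions locally; the wandb calls are sketched in comments.
samples = [("img_001.png", "cat", "cat"), ("img_002.png", "dog", "cat")]
rows = [{"image": f, "prediction": p, "label": l} for f, p, l in samples]

# With a W&B run active (after wandb.init), you could log these as a table:
# table = wandb.Table(columns=["image", "prediction", "label"])
# for f, p, l in samples:
#     table.add_data(wandb.Image(f), p, l)
# wandb.log({"predictions": table})
```

In the dashboard the table is filterable and sortable, so misclassified examples like the second row above are easy to isolate.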

Try W&B

Transparency

Share updates across your organization

It's never been easier to share project updates. Explain how your model works, show graphs of how model versions improved, discuss bugs, and demonstrate progress towards milestones.

Try W&B

Governance

Protect and manage valuable IP

Use this central platform to reliably track all your organization's machine learning models, from experimentation to production. Centrally manage access controls and artifact audit logs, with a complete model history that enables traceable model results.

Try W&B

Data provenance

Reliable records for auditing models

Capture all the inputs, transformations, and systems involved in building a production model. Safeguard valuable intellectual property with all the necessary context to understand and build upon models, even after team members leave.

Try W&B

Organizational efficiency

Unlock productivity, accelerate research

With a well-integrated pipeline, your machine learning teams move quickly and build valuable models in less time. Use Weights & Biases to empower your team to share insights and build models faster.

Try W&B

Data security

Install in private cloud and on-prem

Data security is a cornerstone of our machine learning platform. We support enterprise installations in private cloud and on-prem clusters, and plug in easily with other enterprise-grade tools in your machine learning workflow.

Try W&B

Trusted by 100,000+ machine learning practitioners at 200+ companies and research institutions

See Case Study

"W&B was fundamental for launching our internal machine learning systems, as it enables collaboration across various teams."

Hamel Husain
GitHub

"W&B is a key piece of our fast-paced, cutting-edge, large-scale research workflow: great flexibility, performance, and user experience."

Adrien Gaidon
Toyota Research Institute

"W&B allows us to scale up insights from a single researcher to the entire team and from a single machine to thousands."

Wojciech Zaremba
Cofounder of OpenAI

Never lose track of another ML project. Try Weights & Biases today.

Featured projects

Once you’re using W&B to track and visualize ML experiments, it’s seamless to create a report to showcase your work.

View Gallery

Access the white paper

Read how building the right technical stack for your machine learning team supports core business efforts and safeguards IP


Stay connected with the ML community

Working on machine learning projects? We're bringing together ML practitioners from across industry and academia.

Forum

Join our Discourse community of machine learning practitioners

Join Our Community

Podcast

Get a behind-the-scenes look at production ML with industry leaders

Listen to the latest episode

Webinar

Join virtual events and get insights on best practices for your ML projects

Register for Our Webinars

YouTube

Watch videos about cool ML projects, interviews, W&B tips, and a whole lot more

Watch Our Videos

Never lose track of another ML project. Try W&B today.