An Entire ML Toolbox in 5 Lines of Code

We’re proud to partner with NVIDIA’s Base Command Platform. Contact us to get started, or read on to learn more.

W&B lets you debug your models, track your experiments, optimize hyperparameters, reproduce your best runs, and a whole lot more.

W&B is DGX-ready, SOC-2 compliant, and trusted by more than 100,000 ML practitioners from some of the most innovative organizations in the world:

“W&B was fundamental for launching our internal machine learning systems, as it enables collaboration across various teams.”
Hamel Husain
GitHub
“W&B is a key piece of our fast-paced, cutting-edge, large-scale research workflow: great flexibility, performance, and user experience.”
Adrien Gaidon
Toyota Research Institute
“W&B allows us to scale up insights from a single researcher to the entire team and from a single machine to thousands.”
Wojciech Zaremba
Cofounder of OpenAI

BCP and Weights & Biases complement each other perfectly.

BCP unlocks the world-class compute you need to train the large, cutting-edge models of tomorrow.

Weights & Biases gives you the vital insights you need to understand how your models are performing on that compute infrastructure. It gives your team a single tool to experiment, debug, reproduce, and collaborate on your best models, letting you push those models to production faster.
				
# Flexible integration for any Python script
import wandb

# 1. Start a W&B run
wandb.init(project='gpt3')

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training here

# 3. Log metrics over time to visualize performance
wandb.log({"loss": loss})
				
TensorFlow:

import tensorflow as tf
import wandb

# 1. Start a W&B run
wandb.init(project='gpt3')

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# Model training here

# 3. Log metrics over time to visualize performance
with tf.Session() as sess:
    # ...
    wandb.tensorflow.log(tf.summary.merge_all())
				
PyTorch:

import wandb

# 1. Start a new run
wandb.init(project="gpt-3")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# 3. Log gradients and model parameters
wandb.watch(model)

for batch_idx, (data, target) in enumerate(train_loader):
    # Model training here
    if batch_idx % args.log_interval == 0:
        # 4. Log metrics to visualize performance
        wandb.log({"loss": loss})
				
Keras:

import wandb
from wandb.keras import WandbCallback

# 1. Start a new run
wandb.init(project="gpt-3")

# 2. Save model inputs and hyperparameters
config = wandb.config
config.learning_rate = 0.01

# ... Define a model here

# 3. Log layer dimensions and metrics over time
model.fit(X_train, y_train, validation_data=(X_test, y_test),
          callbacks=[WandbCallback()])
				
scikit-learn:

import wandb

wandb.init(project="visualize-sklearn")

# Model training here

# Log classifier visualizations
wandb.sklearn.plot_classifier(clf, X_train, X_test, y_train, y_test,
                              y_pred, y_probas, labels,
                              model_name='SVC', feature_names=None)

# Log regression visualizations
wandb.sklearn.plot_regressor(reg, X_train, X_test, y_train, y_test,
                             model_name='Ridge')

# Log clustering visualizations
wandb.sklearn.plot_clusterer(kmeans, X_train, cluster_labels,
                             labels=None, model_name='KMeans')
				
Hugging Face Transformers:

# 1. Import wandb and login
import wandb
wandb.login()

from transformers import Trainer, TrainingArguments

# 2. Define which wandb project to log to and name your run
wandb.init(project="gpt-3", name='gpt-3-base-high-lr')

# 3. Add wandb in your Hugging Face `TrainingArguments`
args = TrainingArguments(..., report_to='wandb')

# 4. W&B logging begins automatically when you start training with your Trainer
trainer = Trainer(..., args=args)
trainer.train()
				
XGBoost:

import wandb
import xgboost

# 1. Start a new run
wandb.init(project="visualize-models", name="xgboost")

# 2. Add the callback
bst = xgboost.train(param, xg_train, num_round, watchlist,
                    callbacks=[wandb.xgboost.wandb_callback()])

# Get predictions
pred = bst.predict(xg_test)

Track experiments in real time

See live updates on model performance, check for overfitting, and visualize how a model performs on different classes.

COVID-19 main protease in complex with N3 (left) and COVID-19 main protease in complex with Z31792168 (right), from “Visualizing Molecular Structure with Weights & Biases” by Nicholas Bardy
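Here’s a minimal sketch of what live tracking looks like inside a training loop. The metric names and the train/val variables are illustrative placeholders from your own loop, not a required schema:

import wandb

wandb.init(project="gpt-3")

for epoch in range(num_epochs):
    # ... run one epoch of training and evaluation here ...
    # Logging train and validation loss side by side makes overfitting
    # visible live: the two curves start to diverge in the dashboard
    wandb.log({
        "epoch": epoch,
        "train/loss": train_loss,
        "val/loss": val_loss,
        "val/accuracy": val_accuracy,
    })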

Understand every step of your pipeline

Get a bird’s-eye view of every step of model development, understand model and dataset dependencies, and automatically checksum and version your datasets and models.
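The checksumming and versioning is done with W&B Artifacts. A minimal sketch, with placeholder project, artifact, and file names:

import wandb

# Version a dataset: W&B checksums the contents and creates a new
# version (v0, v1, ...) only when the data actually changes
run = wandb.init(project="gpt-3", job_type="dataset-upload")
artifact = wandb.Artifact("training-data", type="dataset")  # placeholder name
artifact.add_file("train.csv")  # placeholder path
run.log_artifact(artifact)
run.finish()

# A later training run declares the dependency, which W&B records in
# the lineage graph, and downloads the exact version it trained on
run = wandb.init(project="gpt-3", job_type="train")
dataset_dir = run.use_artifact("training-data:latest").download()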

Discover your best runs faster

W&B’s visualizations and dashboards let you explore the space of possible models quickly, without getting bogged down in manual visualization setup.
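W&B Sweeps can drive that exploration automatically. A minimal sketch of a random search over the learning rate; the config values and the body of the train function are illustrative:

import wandb

# Illustrative sweep config: random search, minimizing the logged "loss"
sweep_config = {
    "method": "random",
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
    },
}

def train():
    wandb.init()
    learning_rate = wandb.config.learning_rate  # set by the sweep agent
    # ... train the model with this learning rate, then log the result ...
    wandb.log({"loss": loss})  # placeholder: your final validation loss

sweep_id = wandb.sweep(sweep_config, project="gpt-3")
wandb.agent(sweep_id, function=train, count=20)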

The Science of Debugging with W&B Reports

“We use Weights & Biases as a way to share results and learnings such that we can build on top of each other’s work. The W&B Reports feature has been one of the most critical…”
Sarah Jane
Latent Space

Collaborate across large teams with ease

Customize real-time views of model training and evaluation, share automatically updating dashboards, and create interactive reports for stakeholders.
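For example, logging a wandb.Table turns raw predictions into an interactive view the whole team can explore. The column names and the examples iterable are placeholders from your evaluation loop:

import wandb

run = wandb.init(project="gpt-3")

# Log a table of individual predictions; teammates can sort, filter,
# and group it interactively in the shared workspace or in a Report
table = wandb.Table(columns=["input", "prediction", "target"])
for x, pred, y in examples:  # placeholder: your evaluation outputs
    table.add_data(x, pred, y)
run.log({"eval/predictions": table})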

About Weights & Biases

Our mission is to build the best tools for machine learning. Use W&B for experiment tracking, dataset versioning, and collaborating on ML projects.

Never lose track of another ML project. Try W&B today.