
1. How do I log a model in W&B?

Build better models faster with experiment tracking, dataset versioning, and model management.
Tracking your models is tedious and has historically been done in Google Sheets. With a few lines of code, W&B saves everything you need to debug, compare, and reproduce your models: architecture, hyperparameters, git commits, model weights, GPU usage, and even datasets and predictions.
See live updates on model performance, check for overfitting, and visualize how a model performs on different classes. Get the most out of your GPUs and identify opportunities for optimizing hardware utilization.
1️⃣. Start a new run and pass in hyperparameters to track
2️⃣. Log metrics from training or evaluation
3️⃣. Visualize results in the dashboard: model metrics, gradients, GPU utilization, images, video, audio, and more.
import wandb

config = dict(learning_rate=0.01, momentum=0.2, architecture="CNN", infra="AWS")

# 1️⃣ Start a new run to track this script
wandb.init(project="detect-pedestrians", tags=["baseline", "paper1"], config=config)

# 2️⃣ Log metrics from your script to W&B
# (`inputs` and `outputs` here come from your training loop)
wandb.log({"loss": 0.314, "epoch": 5, "inputs": wandb.Image(inputs),
           "logits": wandb.Histogram(outputs)})

Capture the code

Save the most recent git commit, the command and args, and system hardware setup.
W&B also saves a patch file with uncommitted changes so you can reproduce the exact code that trained the model.
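
No extra code is needed for this: the git commit, diff patch, and system info are recorded when the run starts. As a small sketch, passing `save_code=True` to `wandb.init` (an optional flag) also uploads the script or notebook that launched the run:

import wandb

# Git commit, uncommitted-diff patch, and system info are captured at init;
# save_code=True additionally uploads the launching script or notebook
wandb.init(project="detect-pedestrians", save_code=True)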
