
Dashboard: Track and compare experiments, visualize results

Use the Weights & Biases Dashboard as a central place to organize and visualize results from your machine learning models.

W&B is the central repository for your model pipelines

Save everything you need to compare and reproduce models — architecture, hyperparameters, weights, model predictions, GPU usage, git commits, and even datasets. You can save experiment & dataset files directly to W&B or store pointers to your own storage.


Explore a live dashboard

Instrumenting a model to track experiments in W&B is simple.

In this report, we'll walk you through the main pieces of our API.

  • wandb.init — initialize a new run at the top of your training script. Each run is a single execution of the training script.
  • wandb.config — track hyperparameters
  • wandb.log — log metrics over time within your training loop
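
Putting those three pieces together, here is a minimal sketch of an instrumented script (the project name and the simulated loss are placeholders, not a real model):

import random

import wandb

# Initialize a run and record hyperparameters in the config
wandb.init(project="demo-project", config={"epochs": 4, "learning_rate": 1e-3})

# Log metrics over time inside the training loop
for epoch in range(wandb.config.epochs):
    # A random, decreasing value stands in for a real training loss
    loss = 1.0 / (epoch + 1) + random.random() * 0.05
    wandb.log({"epoch": epoch, "loss": loss})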

If you have any questions, we'd love to answer them.

1. Initialize a new run with wandb.init()

You should generally call wandb.init() once at the start of your training script. This will create a new run and launch a single background process to sync the data to our cloud.

wandb.init() accepts a few keyword arguments:

  • name — a display name for this run; it shows up in the UI, is editable, and doesn't have to be unique
  • config — a dictionary-like object to set as initial config
  • project — the name of the project to which this run will belong
  • entity — the team posting this run (default: your username or your default team)
  • tags — a list of strings to associate with this run as tags
  • group — a string by which to group other runs; see Grouping
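
As a sketch, a call that uses several of these arguments might look like the following (the run name, project, entity, tags, and group are made-up values for illustration):

import wandb

wandb.init(
    name="baseline-dropout-0.5",            # editable display name shown in the UI
    project="image-classification",         # made-up project name
    entity="my-team",                       # made-up team name
    config={"epochs": 4, "batch_size": 32},
    tags=["baseline", "resnet"],
    group="resnet-experiments",             # group related runs in the UI
)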

Check out the docs for the full parameter list.

2. Track hyperparameters with wandb.config

Set wandb.config once at the beginning of your script to save your training config: hyperparameters, input settings like dataset name or model type, and any other independent variables for your experiments. You'll be able to group by config values in the web interface, comparing the settings of different runs and seeing how these affect the output.

Note that output metrics or dependent variables (like loss and accuracy) should be saved with wandb.log instead.

Try it in a Colab →

- Set config variables

# Set config values as attributes (assumes wandb.init() has already been called)
wandb.config.epochs = 4
wandb.config.batch_size = 32
# Use the config to record a unique hash identifier for your dataset
wandb.config.dataset = 'ab131'

# You can also pass an initial config directly to wandb.init
wandb.init(config={"epochs": 4})

- Update config variables

wandb.config.update({"epochs": 4, "batch_size": 32})

- Argparse Flags

You can pass the parsed arguments from argparse straight into wandb.config.update(). This is convenient for quickly testing different hyperparameter values from the command line.

import argparse

import wandb

wandb.init()
wandb.config.epochs = 4

parser = argparse.ArgumentParser()
parser.add_argument('-b', '--batch-size', type=int, default=8, metavar='N',
                    help='input batch size for training (default: 8)')
args = parser.parse_args()
wandb.config.update(args)  # adds all of the arguments as config variables

Check out the docs for more options.

3. Log metrics over time within your training loop with wandb.log()

Next we'll look at how to visualize a model's predictions with Weights & Biases – images, videos, audio, tables, HTML, metrics, plots, 3D objects, and point clouds.
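
As a rough illustration of a few of these media types, the sketch below logs an image, an audio clip, an HTML fragment, and a point cloud, then shows incremental logging with commit=False (the random arrays and the project name are placeholders, not real model outputs):

import numpy as np

import wandb

wandb.init(project="media-logging-demo")  # made-up project name

# Random arrays stand in for real model outputs
image_array = np.random.randint(0, 255, size=(64, 64, 3), dtype=np.uint8)
audio_wave = np.random.uniform(-1, 1, size=16000)       # one second at 16 kHz
point_cloud = np.random.uniform(-1, 1, size=(500, 3))   # N x 3 array of xyz points

wandb.log({
    "examples": wandb.Image(image_array, caption="placeholder image"),
    "audio": wandb.Audio(audio_wave, sample_rate=16000, caption="placeholder audio"),
    "custom_html": wandb.Html("<h3>Hello from wandb.Html</h3>"),
    "point_cloud": wandb.Object3D(point_cloud),
})

# Incremental logging: accumulate values for the current step with commit=False,
# then commit them all with a regular wandb.log call
wandb.log({"loss": 0.25}, commit=False)
wandb.log({"accuracy": 0.9})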

[Live report panels for an example run set: Metrics, Plots, Images, Videos, Audio, 3D Objects, Point Clouds, HTML, and Incremental Logging.]


W&B Dashboard – The Key Features

Persistent and Centralized

Anywhere you train your models, whether on your local machine, your lab cluster, or spot instances in the cloud, we give you the same centralized dashboard. You don't need to spend your time copying outputs from your terminal into a spreadsheet or organizing TensorBoard files from different machines.

Automatic Organization

If you hand off a project to a collaborator or take a vacation, W&B makes it easy to see all the models your team has already tried so you're not wasting hours re-running old experiments.

Powerful Table

Compare each training run and see what hyperparameters changed. Search, filter, sort, and group results from different models. It's easy to look over thousands of model versions and find the best-performing models for different tasks.

Reproduce Models

Weights & Biases is good for experimentation, exploration, and reproducing models later. We capture not just the metrics, but also the hyperparameters and version of the code, and we can save your model checkpoints for you so your project is reproducible.
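
For instance, copying a checkpoint file into the run is a single call (the filename below is a placeholder, and wandb.init() is assumed to have been called earlier in the script):

# Upload a checkpoint file so it is stored alongside this run's metrics and config
wandb.save("model-checkpoint.h5")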

Fast, Flexible Integration

Add W&B to your project in 5 minutes. Install our free, open-source Python package, add a couple of lines to your code, and every time you run your model your metrics and records are logged automatically.
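
Concretely, that integration is roughly a pip install plus two lines of Python; the project name and metric value below are only placeholders:

# First: pip install wandb, then run `wandb login` once to authenticate
import wandb

wandb.init(project="quickstart-demo")  # made-up project name
wandb.log({"accuracy": 0.9})           # illustrative metric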

Tools for Collaboration

Use W&B to organize complex machine learning projects. It's easy to share a link to W&B, and you can use private teams to have everyone sending results to a shared project. We also support collaboration via reports: add interactive visualizations and describe your work in Markdown. This is a great way to keep a work log, share findings with your supervisor, or present results to your lab.