
Hyperparameter optimization with W&B

Created on February 15 | Last edited on March 13

Visualize your models, track your results

Add a couple of lines of code to your machine learning model, and we'll show you a live dashboard of results. This page is an example report generated with W&B.

Here's how to use our tool:

  1. Add wandb logging to your model script. Quickstart →
  2. See metrics streamed to a W&B Dashboard. Free account →
  3. Optimize your model hyperparameters with Sweeps →

Results from one experiment

[Panel: Baseline run]

Run your first experiment

I took a simple PyTorch example script and added a few lines of wandb code to track results with this tool. Try cloning the repo and running the script yourself to see live results stream into a Weights & Biases dashboard.

Get the code →

Taking my baseline run and starting a sweep

Next, I want to explore which hyperparameter combinations perform best. I launch a sweep from W&B to try different combinations of learning rate, batch size, momentum, and other hyperparameters.

Learn more about sweeps →
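A sweep is driven by a configuration describing the search method, the metric to optimize, and the ranges to explore. The sketch below is illustrative, and the method, metric name, and ranges are assumptions rather than the exact settings behind this report:

```python
# Illustrative sweep configuration; method, metric, and ranges are assumptions.
sweep_configuration = {
    "method": "bayes",  # Bayesian search over the parameter space
    "metric": {"name": "accuracy", "goal": "maximize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [32, 64, 128]},
        "momentum": {"min": 0.5, "max": 0.99},
    },
}

# Launching requires a logged-in W&B account, so it is commented out here:
# sweep_id = wandb.sweep(sweep_configuration, project="my-project")
# wandb.agent(sweep_id, function=train)  # `train` is your training function
```

The agent repeatedly pulls a parameter combination from the sweep server, calls your training function with it, and logs the results as one run per combination.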

Play with the interactive graphs below:

  1. Parallel Coordinates: Each line is a different run. Select ranges on the axes to see how the hyperparameters affect the output metric.

  2. Scatter Plot: Explore how different batch sizes affect the accuracy of different versions of my model.

  3. Parameter Importance: This table uses data from all the runs in my sweep to determine which hyperparameters affect my accuracy the most. Click the dropdown to see how hyperparameters affect per-class accuracy metrics.

Results from a hyperparameter sweep

[Panel: Hyperparameter sweep]

Start tracking your models

It's easy to get started. Just add a few lines of code to access:

  • Rich visualizations in a live dashboard
  • Centralized results streamed from all your machines
  • Model versioning and reproducibility
  • Easy results sharing and collaboration

Sign up for a free account →