Hyperparameter optimization with W&B
Visualize your models, track your results
Add a couple of lines of code to your machine learning model, and we'll show you a live dashboard of results. This page is an example report generated with W&B.
Here's how to use our tool:
- Add wandb logging to your model script. Quickstart →
- See metrics streamed to a W&B Dashboard. Free account →
- Optimize your model hyperparameters with Sweeps →
Results from one experiment
Run your first experiment
I took a simple PyTorch example script and added a few lines of wandb logging to track results. Try cloning the repo and running the script yourself to see live results stream into a Weights & Biases dashboard.
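If you're curious what those added lines look like, here's a minimal sketch; the project name, hyperparameters, and simulated metrics are placeholders, not the values from my actual run:

```python
import math
import random

import wandb

# Placeholder project name and hyperparameters, for illustration only.
run = wandb.init(
    project="pytorch-example",
    config={"learning_rate": 0.01, "batch_size": 32, "epochs": 5},
)

for epoch in range(run.config["epochs"]):
    # Stand-in for a real PyTorch training step; swap in your own loop.
    loss = math.exp(-epoch) + random.random() * 0.1

    # Each call streams a row of metrics to the live W&B dashboard.
    wandb.log({"epoch": epoch, "loss": loss})

run.finish()
```

Once the script is running, the dashboard picks up every wandb.log call in real time.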
Taking my baseline run and starting a sweep
Next, I want to explore what hyperparameters will perform the best. I launch a sweep from W&B to try different combinations of learning rate, batch size, momentum, etc.
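For reference, here's a hedged sketch of how a sweep like this can be launched from Python; the search method, metric, parameter names, and ranges are illustrative assumptions, not the exact settings behind the charts below:

```python
import wandb

# Illustrative sweep configuration; method, metric, and ranges are
# assumptions, not the exact settings used in this report's sweep.
sweep_config = {
    "method": "random",  # W&B also supports "grid" and "bayes"
    "metric": {"name": "loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64, 128]},
        "momentum": {"min": 0.5, "max": 0.99},
    },
}

def train():
    run = wandb.init()
    # run.config holds the values the sweep controller picked for this run.
    # A toy loss stands in for real training here.
    loss = (run.config.learning_rate - 0.01) ** 2 / run.config.batch_size
    wandb.log({"loss": loss})

sweep_id = wandb.sweep(sweep_config, project="pytorch-example")
wandb.agent(sweep_id, function=train, count=20)
```

Each run the agent launches shows up as one line or point in the charts below.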
Play with the interactive graphs below:
- Parallel Coordinates: Each line is a different run. See how the hyperparameters affect the output metric by selecting ranges on the axes.
- Scatter Plot: Explore how different batch sizes affected the accuracy of different versions of my model.
- Parameter Importance: This table uses data from all the runs in my sweep and determines which hyperparameters affect my accuracy the most. Click the dropdown to see how hyperparameters affect different class accuracy metrics.
Results from a hyperparameter sweep
Start tracking your models
It's easy to get started. Just add a few lines of code to access:
- Rich visualizations in a live dashboard
- Centralized results streamed from all your machines
- Model versioning and reproducibility (see the sketch after this list)
- Easy results sharing and collaboration
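As one example of the versioning point above, here's a hedged sketch of logging a trained model file as a versioned artifact; the project, artifact name, and file path are placeholders:

```python
import wandb

run = wandb.init(project="pytorch-example")

# Log a trained model file as a versioned artifact. W&B assigns a new
# version (v0, v1, ...) each time the file's contents change.
# "model.pt" is a placeholder path; point it at your saved weights.
artifact = wandb.Artifact("example-model", type="model")
artifact.add_file("model.pt")
run.log_artifact(artifact)

run.finish()
```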
