Multi-GPU Hyperparameter Sweeps in Three Simple Steps
Start using hyperparameter sweeps easily with our lightweight integration
Hyperparameter sweeps automatically test different configurations of your model. They address a wide range of needs, including running experiments under different test conditions, exploring your dataset, and tuning hyperparameters at scale.
Setting up the infrastructure for these sweeps can be tedious, so we've built W&B Sweeps to be simple to set up and flexible to deploy. Inspired by Google's Vizier, we've implemented a wide range of features, including Bayesian optimization and hyperband early stopping. Integration is simple: if you have a machine learning script running on the command line, you're ready to go.
Step 1: Select Hyperparameters
First, you’ll want to select the hyperparameters you’re sweeping over. Set this up in a YAML file, as detailed further in the sweep docs.
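Here's a minimal sketch of what sweep.yaml might look like. The program name, metric, and hyperparameter ranges are illustrative placeholders (see the sweep docs for the full set of options); it also shows how to turn on the Bayesian optimization and hyperband early stopping mentioned above.

program: train.py       # hypothetical training script
method: bayes           # Bayesian optimization
metric:
  name: val_loss        # a metric your script logs
  goal: minimize
early_terminate:
  type: hyperband       # hyperband early stopping
  min_iter: 3
parameters:
  learning_rate:
    min: 0.0001
    max: 0.1
  batch_size:
    values: [32, 64, 128]

Once the file is in place, two commands register the sweep: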
wandb init              # Initialize your project repo
wandb sweep sweep.yaml  # returns your SWEEP_ID

Step 2: Launch Agents
Grab your sweep ID from the output of the command above and launch some agents to begin running your sweep.
wandb agent mcg70107
Sweep agents can run in any environment where wandb is installed. If you have multiple GPUs on your machine, run one agent per GPU, pinning each to its own device with the CUDA_VISIBLE_DEVICES environment variable.
CUDA_VISIBLE_DEVICES=0 wandb agent mcg70107
CUDA_VISIBLE_DEVICES=1 wandb agent mcg70107
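With more GPUs, the same pattern scales with a small shell loop. Here's a sketch assuming a four-GPU machine and the sweep ID above:

for gpu in 0 1 2 3; do
  # Pin each agent to a single GPU and run it in the background
  CUDA_VISIBLE_DEVICES=$gpu wandb agent mcg70107 &
done
wait  # block until all agents exit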
Step 3: Visualize Training
Running hyperparameter sweeps has opened up new possibilities in my research. Recently I've been using them as a tool to explore new datasets, for example ShapeNet for 3D semantic segmentation.
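For runs to show up in the sweep's visualizations, the training script only needs to read its hyperparameters from wandb.config and log metrics with wandb.log. Here's a minimal sketch of such a script; the training loop and metric are placeholders, not code from this post:

import wandb

def train():
    # The sweep agent launches this script and injects the sampled
    # hyperparameters into wandb.config
    wandb.init()
    config = wandb.config  # e.g. config.learning_rate, config.batch_size

    for epoch in range(10):
        # ... train one epoch using the sampled hyperparameters ...
        val_loss = 1.0 / (epoch + 1)  # placeholder metric
        wandb.log({"epoch": epoch, "val_loss": val_loss})

if __name__ == "__main__":
    train()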
These two papers inspired the approaches I took in my sweep: