Weights & Biases helps your ML team unlock their productivity by optimizing, visualizing, collaborating on, and standardizing their model and data pipelines, regardless of framework, environment, or workflow. Used by the likes of OpenAI, Toyota, and GitHub, W&B is part of the new standard of best practices for machine learning. By saving everything you need to track and compare models (architecture, hyperparameters, weights, model predictions, GPU usage, git commits, and even datasets), W&B makes your ML workflows reproducible.
Today we're announcing an integration with a tool our community adores: Ray/Tune, one of the first and most respected libraries for scalable hyperparameter optimization. With just a few lines of code, Ray/Tune lets researchers optimize their models with state-of-the-art algorithms and scale their hyperparameter optimization process to hundreds of nodes and GPUs.
We're especially excited about the possibilities this collaboration with our friends at Ray/Tune opens up. Both Weights & Biases and Ray/Tune are built for scale and handle millions of models every month for teams doing some of the most cutting-edge deep learning research.
While W&B is a centralized repository for everything you need to track, reproduce, and gain insights from your models, Ray/Tune provides a simple interface for scaling and running distributed experiments. That simplicity is a big part of why our community likes Ray/Tune.
There are two ways to use the wandb integration with Ray/Tune. The first is the WandbLogger, which you pass to tune.run through the loggers argument:
from ray import tune
from ray.tune.integration.wandb import WandbLogger

tune.run(
    train,
    loggers=[WandbLogger],
    config={
        # settings for the integration live under the "wandb" key of config
        "wandb": {"project": "rayTune", "monitor_gym": True}
    })
The WandbLogger automatically logs the metrics reported via tune.report to the W&B dashboard of the specified project.
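To make that concrete, here is a minimal end-to-end sketch. The train function, the lr hyperparameter, and the mean_accuracy metric are illustrative stand-ins rather than part of the integration, and the example assumes you have already authenticated with wandb login:

from ray import tune
from ray.tune.integration.wandb import WandbLogger

def train(config):
    # toy objective standing in for a real training loop
    for step in range(10):
        score = config["lr"] * step
        # each tune.report call is forwarded to W&B by WandbLogger
        tune.report(mean_accuracy=score)

tune.run(
    train,
    loggers=[WandbLogger],
    config={
        "lr": tune.grid_search([0.001, 0.01, 0.1]),  # illustrative search space
        "wandb": {"project": "rayTune"}
    })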
You can also use the wandb_mixin function decorator when you need to log custom metrics, charts, and other visualizations:
from ray import tune
from ray.tune.integration.wandb import wandb_mixin
import wandb

@wandb_mixin
def train(config):
    ...
    # log custom metrics, charts, and visualizations straight to W&B
    wandb.log({...})
    # report the score back to Tune's schedulers and search algorithms
    tune.report(metric=score)
    ...
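As with the logger, wandb_mixin reads its settings from the "wandb" key of config. Here is a short sketch of launching the decorated trainable, where the lr search space is an illustrative assumption:

from ray import tune

tune.run(
    train,  # the @wandb_mixin-decorated trainable from above
    config={
        "lr": tune.grid_search([0.001, 0.01]),  # illustrative search space
        # wandb_mixin initializes W&B from these settings
        "wandb": {"project": "rayTune"}
    })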