
Track Model Performance with Weights & Biases

This article explores how you can use Weights & Biases to visualize the performance of any model, and how to log metrics for a range of experiments.
Created on February 17 | Last edited on November 3
In this report, I'll show you how you can visualize any model's performance with Weights & Biases.
We'll see how to log metrics from plain Python for loops, boosting models (XGBoost & LightGBM), scikit-learn, and neural networks.


If you have any questions, we'd love to answer them.

Log a Metric

1. Log any metric with Weights & Biases

  • wandb.init() – Initialize a new W&B run. Each run is a single execution of the training script.
  • wandb.log() – Logs custom objects like images, videos, audio files, HTML, plots, point clouds, etc.
  • %%wandb – Add this at the top of a cell to show model metrics live below the cell.

Example

# Get Apple stock price data from https://www.macrotrends.net/stocks/charts/AAPL/apple/stock-price-history
import pandas as pd
import wandb

# Read in dataset
apple = pd.read_csv("../input/kernel-files/apple.csv")
apple = apple[-1000:]
wandb.init(project="visualize-models", name="a_metric")

# Log the metric
for price in apple['close']:
    wandb.log({"Stock Price": price})



XGBoost and LightGBM

2. Visualize boosting model performance

Start out by importing the experiment tracking library and setting up your free W&B account:
  • import wandb – Import the wandb library
  • callbacks=[wandb.xgboost.wandb_callback()] – Add the wandb XGBoost callback, or
  • callbacks=[wandb.lightgbm.wandb_callback()] – Add the wandb LightGBM callback

Example

# lightgbm callback
lgb.train(params, X_train, callbacks=[wandb.lightgbm.wandb_callback()])

# xgboost callback
xgb.train(param, xg_train, num_round, watchlist, callbacks=[wandb.xgboost.wandb_callback()])



Sklearn

3. Visualize scikit-learn performance

Logging sklearn plots with Weights & Biases is simple.

Step 1: First import wandb and initialize a new run.

import wandb

wandb.init(project="visualize-sklearn")

# load and preprocess dataset

# train a model

Step 2: Visualize individual plots.

# Visualize single plot

wandb.sklearn.plot_confusion_matrix(y_true, y_probas, labels)

Or visualize all plots at once:

# Visualize all classifier plots

wandb.sklearn.plot_classifier(clf, X_train, X_test, y_train, y_test, y_pred, y_probas, labels, model_name='SVC', feature_names=None)

# All regression plots

wandb.sklearn.plot_regressor(reg, X_train, X_test, y_train, y_test, model_name='Ridge')

# All clustering plots

wandb.sklearn.plot_clusterer(kmeans, X_train, cluster_labels, labels, model_name='KMeans')




Neural Network

4. Visualize Neural Network Performance

Start out by installing the experiment tracking library and setting up your free W&B account:
  • import wandb – Import the wandb library
  • from wandb.keras import WandbCallback – Import the wandb [keras callback](https://docs.wandb.com/library/frameworks/keras)
  • wandb.init() – Initialize a new W&B run. Each run is a single execution of the training script.
  • wandb.config – Save all your hyperparameters in a config object. This lets you use the W&B app to sort and compare your runs by hyperparameter values.
  • callbacks=[WandbCallback()] – Fetch all layer dimensions, model parameters and log them automatically to your W&B dashboard.
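Hyperparameters are typically gathered in a plain dictionary before being handed to wandb.config. As a minimal sketch (the parameter names and values below are hypothetical; adapt them to your model):

```python
# Hypothetical hyperparameter dictionary; pass it as wandb.init(config=hyperparams)
# so every value shows up as a sortable, filterable column in the W&B app.
hyperparams = {
    "epochs": 10,
    "batch_size": 32,
    "learning_rate": 1e-3,
    "optimizer": "adam",
}

# Assuming wandb is installed and you are logged in, the run would start with:
# wandb.init(project="visualize-models", config=hyperparams)
# config = wandb.config   # then read values as attributes, e.g. config.epochs
```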

Example



# Add WandbCallback() to the fit function
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=config.epochs,
          callbacks=[WandbCallback(data_type="image", labels=labels)])

Using PyTorch?

Here's how to use W&B to track your PyTorch model performance, gradients, and predictions.

5. Visualize A Hyperparameter Sweep

Running a hyperparameter sweep with Weights & Biases is very easy. There are just 3 simple steps:
  1. Define the sweep: we do this by creating a dictionary or a YAML file that specifies the parameters to search through, the search strategy, the optimization metric, and so on.
  2. Initialize the sweep: with one line of code we initialize the sweep and pass in the dictionary of sweep configurations: sweep_id = wandb.sweep(sweep_config)
  3. Run the sweep agent: also accomplished with one line of code, we call wandb.agent() and pass the sweep_id to run, along with a function that defines your model architecture and trains it: wandb.agent(sweep_id, function=train)
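The three steps above can be sketched as a plain dictionary plus two calls (the parameter names, values, and project name here are hypothetical; swap in your own):

```python
# Step 1: define the sweep as a dictionary.
# "method" picks the search strategy (grid, random, or bayes);
# "metric" names the value to optimize; "parameters" lists the search space.
sweep_config = {
    "method": "random",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"values": [1e-2, 1e-3, 1e-4]},
        "batch_size": {"values": [32, 64, 128]},
    },
}

# Steps 2 and 3, assuming wandb is installed, you are logged in, and
# train() is your own function that builds and trains the model:
# sweep_id = wandb.sweep(sweep_config, project="visualize-models")
# wandb.agent(sweep_id, function=train)
```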

