Track Model Performance with Weights & Biases
This article explores how you can use Weights & Biases to visualize the performance of any model, and how to log metrics for a range of experiments.
In this report, I'll show you how you can visualize any model's performance with Weights & Biases.
We'll see how to log metrics from vanilla for loops, boosting models (xgboost & lightgbm), sklearn, and neural networks.
Log a Metric
1. Log any metric with Weights & Biases
- wandb.init() – Initialize a new W&B run. Each run is a single execution of the training script.
- wandb.log() – Log metrics and custom objects such as images, videos, audio files, HTML, plots, and point clouds.
- %%wandb – Add this at the top of a cell to show model metrics live below the cell
Example
# Get Apple stock price data from https://www.macrotrends.net/stocks/charts/AAPL/apple/stock-price-history
import pandas as pd
import wandb

# Read in dataset
apple = pd.read_csv("../input/kernel-files/apple.csv")
apple = apple[-1000:]

wandb.init(project="visualize-models", name="a_metric")

# Log the metric
for price in apple['close']:
    wandb.log({"Stock Price": price})
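If you're working in a Jupyter notebook, the %%wandb magic mentioned above renders the run's charts live below the cell. Here is a minimal sketch, reusing the apple DataFrame and run from the example above:

```python
%%wandb
# The cell magic (available once wandb is imported in the notebook) shows
# live charts for the active run directly below this cell as it logs.
for price in apple['close']:
    wandb.log({"Stock Price": price})
```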
XGBoost and LightGBM
2. Visualize boosting model performance
Start out by importing the experiment tracking library and setting up your free W&B account:
- import wandb – Import the wandb library
- callbacks=[wandb.lightgbm.wandb_callback()] – Add the wandb LightGBM callback
Example
# lightgbm callback
lgb.train(params, X_train, callbacks=[wandb.lightgbm.wandb_callback()])

# xgboost callback
xgb.train(param, xg_train, num_round, watchlist, callbacks=[wandb.xgboost.wandb_callback()])
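If it helps to see where the callback fits into a full training call, here is a minimal LightGBM sketch; the synthetic data, hyperparameters, and project name are illustrative assumptions, not part of the original example:

```python
import lightgbm as lgb
import numpy as np
import wandb

wandb.init(project="visualize-models", name="lightgbm")

# Synthetic regression data, purely for illustration
X = np.random.rand(1000, 10)
y = X.sum(axis=1) + np.random.normal(scale=0.1, size=1000)
train_data = lgb.Dataset(X[:800], label=y[:800])
valid_data = lgb.Dataset(X[800:], label=y[800:], reference=train_data)

params = {"objective": "regression", "metric": "rmse", "learning_rate": 0.05}

# The W&B callback logs each boosting round's metrics to the run
gbm = lgb.train(
    params,
    train_data,
    num_boost_round=100,
    valid_sets=[valid_data],
    callbacks=[wandb.lightgbm.wandb_callback()],
)
```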
Sklearn
3. Visualize scikit-learn performance
Logging sklearn plots with Weights & Biases is simple.
Step 1: First import wandb and initialize a new run.
import wandb
wandb.init(project="visualize-sklearn")

# load and preprocess dataset
# train a model
Step 2: Visualize individual plots.
# Visualize single plot
wandb.sklearn.plot_confusion_matrix(y_true, y_pred, labels)
Or visualize all plots at once:
# Visualize all classifier plots
wandb.sklearn.plot_classifier(clf, X_train, X_test, y_train, y_test, y_pred, y_probas, labels, model_name='SVC', feature_names=None)

# All regression plots
wandb.sklearn.plot_regressor(reg, X_train, X_test, y_train, y_test, model_name='Ridge')

# All clustering plots
wandb.sklearn.plot_clusterer(kmeans, X_train, cluster_labels, labels, model_name='KMeans')
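Putting both steps together, here is a minimal end-to-end sketch; the iris dataset and the SVC classifier are illustrative choices, not part of the original snippet:

```python
import wandb
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

wandb.init(project="visualize-sklearn")

# Load a toy dataset and split it
iris = load_iris()
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, random_state=42)

# Train a classifier that can also produce class probabilities
clf = SVC(probability=True)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_probas = clf.predict_proba(X_test)

# Log the full suite of classifier plots to the run's dashboard
wandb.sklearn.plot_classifier(
    clf, X_train, X_test, y_train, y_test, y_pred, y_probas,
    iris.target_names, model_name='SVC', feature_names=None,
)
```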
Neural Network
4. Visualize Neural Network Performance
Start out by installing the experiment tracking library and setting up your free W&B account:
- import wandb – Import the wandb library
- from wandb.keras import WandbCallback – Import the wandb [keras callback](https://docs.wandb.com/library/frameworks/keras)
- wandb.init() – Initialize a new W&B run. Each run is a single execution of the training script.
- wandb.config – Save all your hyperparameters in a config object. This lets you use the W&B app to sort and compare your runs by hyperparameter values.
- callbacks=[WandbCallback()] – Fetches all layer dimensions and model parameters and logs them automatically to your W&B dashboard.
Example
# Add WandbCallback() to the fit function
model.fit(X_train, y_train,
          validation_data=(X_test, y_test),
          epochs=config.epochs,
          callbacks=[WandbCallback(data_type="image", labels=labels)])
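For a fuller picture of how the pieces above fit together, here is a minimal Keras sketch; the MNIST dataset, network architecture, and hyperparameter values are illustrative assumptions, not part of the original example:

```python
import wandb
from wandb.keras import WandbCallback
from tensorflow import keras

# Initialize a run and store the hyperparameters in wandb.config
wandb.init(project="visualize-models",
           config={"epochs": 5, "lr": 1e-3, "batch_size": 32})
config = wandb.config

# Load and scale a toy image dataset
(X_train, y_train), (X_test, y_test) = keras.datasets.mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0

# A small fully connected classifier
model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(128, activation="relu"),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=keras.optimizers.Adam(config.lr),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# WandbCallback logs per-epoch loss and accuracy to the W&B dashboard
model.fit(X_train, y_train,
          validation_data=(X_test, y_test),
          epochs=config.epochs,
          batch_size=config.batch_size,
          callbacks=[WandbCallback()])
```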
5. Visualize A Hyperparameter Sweep
Running a hyperparameter sweep with Weights & Biases is very easy. There are just three simple steps (a minimal code sketch follows this list):
- Define the sweep: we do this by creating a dictionary or a YAML file that specifies the parameters to search through, the search strategy, and the optimization metric.
- Initialize the sweep: with one line of code we initialize the sweep and pass in the dictionary of sweep configurations: sweep_id = wandb.sweep(sweep_config)
- Run the sweep agent: also accomplished with one line of code, we call wandb.agent() and pass the sweep_id to run, along with a function that defines your model architecture and trains it: wandb.agent(sweep_id, function=train)
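Here is a minimal sketch of those three steps; the search space, project name, and the body of the train() function are hypothetical placeholders rather than a real model:

```python
import wandb

# 1. Define the sweep: search strategy, metric to optimize, and parameter ranges
sweep_config = {
    "method": "random",
    "metric": {"name": "val_loss", "goal": "minimize"},
    "parameters": {
        "learning_rate": {"min": 0.0001, "max": 0.1},
        "batch_size": {"values": [16, 32, 64]},
    },
}

def train():
    # Placeholder training function: each agent call starts a new run,
    # reads hyperparameters from wandb.config, and logs the target metric.
    with wandb.init():
        config = wandb.config
        val_loss = config.learning_rate * config.batch_size  # stand-in for a real model
        wandb.log({"val_loss": val_loss})

# 2. Initialize the sweep
sweep_id = wandb.sweep(sweep_config, project="visualize-models")

# 3. Run the sweep agent
wandb.agent(sweep_id, function=train, count=8)
```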