Plotting Keras History Using Weights & Biases
In this article, we take a look at how to save and plot the training history of a Keras model using Weights & Biases.
Created on June 26 | Last edited on October 26
By default, Keras' model.fit() returns a History object, produced by a callback that Keras applies automatically to every model. This object records the loss, accuracy, and any other tracked metrics for each epoch, in memory.
Accessing the history
You can access the data in the history object like so (note that recent Keras versions use the epochs argument rather than the older nb_epoch):
hist = model.fit(X_train, y_train,
                 batch_size=batch_size,
                 epochs=epochs,
                 validation_data=(X_test, y_test))
So, what's in history? We can access the metrics collected in the history object by accessing its keys.
print(hist.history.keys())
# Output: dict_keys(['loss', 'acc', 'val_loss', 'val_acc', 'lr'])
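Each of these keys maps to a list with one value per epoch, so individual metrics are easy to pull out. A minimal sketch, with the history dict stubbed in place of a real training run:

```python
# hist.history maps each metric name to a per-epoch list of values.
# Stubbed here with dummy numbers standing in for a real model.fit() run:
history = {"loss": [0.9, 0.6, 0.4], "acc": [0.55, 0.71, 0.83]}

final_acc = history["acc"][-1]    # metric value from the last epoch
best_loss = min(history["loss"])  # best (lowest) loss across epochs
print(final_acc, best_loss)       # -> 0.83 0.4
```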
Visualizing History
The dictionary values would be hard to parse as text, so next up let's visualize the data collected by the history object.
Normally, we'd do this by writing custom matplotlib plots. That approach quickly becomes untenable, for a few reasons:
- We'd need to write and maintain plotting code for each metric we care about, in every project. We'd also need to write code for any customizations: line colors, line types, smoothing, etc.
- For experiments involving multiple models, we'd need to handle saving them as images and matching them to the model configuration later.
- If we're working with collaborators we'd have to pass these images and model configurations around in email or messaging apps.
This process quickly gets unwieldy if we're training more than a few models, or collaborating with other people.
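For reference, a minimal version of that manual approach might look like the sketch below. It assumes the history dict returned via hist.history, stubbed here with dummy values, and saves one image you'd then have to manage by hand:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headlessly
import matplotlib.pyplot as plt

# Stand-in for hist.history -- in practice this comes from model.fit()
history = {
    "loss":     [0.9, 0.6, 0.4, 0.3],
    "val_loss": [1.0, 0.7, 0.5, 0.45],
}

epochs = range(1, len(history["loss"]) + 1)
plt.plot(epochs, history["loss"], label="train loss")
plt.plot(epochs, history["val_loss"], label="val loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.savefig("loss.png")  # one image per metric, per run, to track manually
```

And this only covers a single metric for a single run; comparing runs means repeating this and juggling the resulting image files.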
Weights & Biases makes it easy to visualize history automatically. You can simply add a WandbCallback to your model.fit() call to automatically save all the tracked metrics and loss values. Check out the docs for more info.
import wandb
from wandb.keras import WandbCallback

# 1. Start a new run
wandb.init(project="gpt-3")

# ... Define a model

# 2. Log layer dimensions and metrics over time
model.fit(X_train, y_train,
          validation_data=(X_test, y_test),
          callbacks=[WandbCallback()])
In the plots below, you can see all the metrics collected by the history object visualized by Weights & Biases with just a couple lines of code:
[Interactive W&B panel: metrics from a run set of 27 runs]
Weights & Biases
Weights & Biases helps you keep track of your machine learning experiments. Use our tool to log hyperparameters and output metrics from your runs, then visualize and compare results and quickly share findings with your colleagues.