In this tutorial, we'll show you how to save and plot the history of the performance of a Keras model over time, using Weights & Biases.
By default, Keras' model.fit() returns a History object. This object keeps track of the loss, accuracy, and other training metrics for each epoch in memory.
You can access the data in the history object like so –
hist = model.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, validation_data=(X_test, y_test))
So, what's in history? We can access the metrics collected in the history object by accessing its keys.
print(hist.history.keys()) # Output: dict_keys(['loss', 'acc', 'val_loss', 'val_acc', 'lr'])
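To make the structure concrete, here's a minimal sketch of how you'd read individual metrics out of that dictionary. The values below are made up for illustration; in practice they come from your own training run's hist.history.

```python
# Illustrative stand-in for hist.history: a plain dict mapping each
# metric name to a list with one value per epoch (values are made up).
history = {
    "loss": [0.9, 0.5, 0.3],
    "val_loss": [1.0, 0.6, 0.4],
}

# Each key holds one entry per epoch, so the final training loss is
# simply the last element of the "loss" list.
final_loss = history["loss"][-1]
print(final_loss)  # 0.3
```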
The dictionary values would be hard to parse as text, so next up let's visualize the data collected by the history object.
Normally, we'd do this by writing custom matplotlib plots by hand. This quickly becomes unwieldy once we're training more than a few models, or collaborating with other people.
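For reference, this is roughly what the manual matplotlib approach looks like. This is a sketch, not the only way to do it; the history dict here is a made-up stand-in for the one returned by model.fit().

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Made-up per-epoch metrics standing in for hist.history
history = {
    "loss": [0.9, 0.5, 0.3],
    "val_loss": [1.0, 0.6, 0.4],
}

epochs = range(1, len(history["loss"]) + 1)

fig, ax = plt.subplots()
ax.plot(epochs, history["loss"], label="train loss")
ax.plot(epochs, history["val_loss"], label="val loss")
ax.set_xlabel("epoch")
ax.set_ylabel("loss")
ax.legend()
fig.savefig("loss.png")
```

And you'd repeat a variation of this for every metric, every model, and every experiment you want to compare, which is exactly the bookkeeping that gets out of hand.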
Weights & Biases makes it easy to visualize the training history automatically. Simply add a WandbCallback to your model.fit() call to save all the tracked metrics and loss values. Check out the docs for more info.
import wandb
from wandb.keras import WandbCallback

# 1. Start a new run
wandb.init(project="gpt-3")

# ... Define a model

# 2. Log layer dimensions and metrics over time
model.fit(X_train, y_train,
          validation_data=(X_test, y_test),
          callbacks=[WandbCallback()])
In the plots below, you can see all the metrics collected by the history object visualized by Weights & Biases with just a couple lines of code.
Weights & Biases helps you keep track of your machine learning experiments. Use our tool to log hyperparameters and output metrics from your runs, then visualize and compare results and quickly share findings with your colleagues.
Get started in 5 minutes.