
Plex: Google's Framework For Improved Model Reliability

Researchers at Google AI present a framework for model reliability, focusing on adaptability and the deliberate expression of uncertainty. Two Plex models show off what the framework is capable of.
Google AI researchers have introduced a framework for improving deep learning model reliability, along with new pre-trained models and 40 supporting datasets.
There are three key pieces that define this framework: uncertainty, robust generalization, and adaptation. The researchers go on to say that "a reliable model should aim to do well in all of these areas simultaneously out-of-the-box, without requiring any customization for individual tasks."

The highlight of the framework is its focus on uncertainty and how models handle it. Though it may seem counterintuitive, a model that can express uncertainty about its decisions is demonstrating that it recognizes when an input falls outside what it has learned. When a model signals that uncertainty, we know its prediction may be unreliable and can treat it accordingly.
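Plex measures uncertainty in more sophisticated ways, but the basic idea can be pictured with a minimal sketch: compute the model's predictive confidence and abstain when it is low. The `predict_or_abstain` helper and the 0.8 threshold below are illustrative placeholders, not part of the Plex release.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def predict_or_abstain(logits, threshold=0.8):
    """Return the predicted class, or None if the model is not confident enough.

    `threshold` is an arbitrary illustrative value, not a Plex hyperparameter.
    """
    probs = softmax(logits)
    confidence = probs.max(axis=-1)
    prediction = probs.argmax(axis=-1)
    # Abstain (signal "I don't know") when the top class probability is low.
    return prediction if confidence >= threshold else None

# Example: a 3-class logit vector where the model is unsure vs. confident.
print(predict_or_abstain(np.array([0.2, 0.1, 0.3])))  # -> None (abstains)
print(predict_or_abstain(np.array([5.0, 0.1, 0.3])))  # -> 0 (confident)
```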
Models should also generalize robustly: inputs and tasks they were not explicitly trained on should still be handled sensibly, whether by responding with uncertainty or by extrapolating from what they do know to make an informed decision.
Additionally, adapting to a new dataset should be a quick and painless process, with the model learning from as few labeled examples as possible.
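One common way to picture this kind of few-shot adaptation is to keep the pretrained backbone frozen and fit only a lightweight head on the handful of labeled examples. The sketch below uses random placeholder embeddings and scikit-learn's LogisticRegression purely for illustration; it is not the Plex fine-tuning recipe.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Pretend these are frozen embeddings from a pretrained backbone
# (the backbone itself is never updated during adaptation).
rng = np.random.default_rng(0)
few_shot_embeddings = rng.normal(size=(10, 512))   # only 10 labeled examples
few_shot_labels = rng.integers(0, 2, size=10)      # binary labels for the new task

# Adapting to the new dataset = fitting a lightweight head on top.
head = LogisticRegression(max_iter=1000).fit(few_shot_embeddings, few_shot_labels)

new_examples = rng.normal(size=(3, 512))
print(head.predict(new_examples))                  # predictions for unseen inputs
```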

How Plex illustrates the framework

Two Plex models were created to illustrate the framework's effectiveness: ViT-Plex (for vision tasks) and T5-Plex (for language tasks). Plex changes only a few pieces of a base model's architecture, so it acts essentially as an extension that can be bolted onto almost any model architecture.
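The paper's actual modifications center on the output layer (for example, efficient last-layer ensembling). The sketch below is a simplified stand-in for that idea, not the real Plex code: the pretrained backbone is left untouched, and only a small ensemble of output heads is added on top, with their predicted probabilities averaged.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def ensemble_head(features, head_weights):
    """Average the predictions of several lightweight output heads.

    `head_weights` is a list of (hidden_dim, num_classes) matrices; everything
    below the head (the pretrained backbone producing `features`) is untouched.
    """
    member_probs = [softmax(features @ w) for w in head_weights]
    return np.mean(member_probs, axis=0)

# Toy example: 512-dim features, 3 classes, an ensemble of 4 heads.
rng = np.random.default_rng(0)
features = rng.normal(size=(1, 512))
heads = [rng.normal(size=(512, 3)) * 0.01 for _ in range(4)]
print(ensemble_head(features, heads))  # class probabilities averaged across members
```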
Compared to state-of-the-art models, the Plex models performed much better across many of the newly introduced datasets.

Find out more

Tags: ML News