
OPT-IML: Meta Releases New Instruction-Tuned OPT Models + NLP Task Benchmark

Meta AI released a new set of open-source NLP models based on OPT, called OPT-IML. These models are fine-tuned on a new instruction-tuning benchmark, OPT-IML Bench.
Created on December 22|Last edited on December 22
Today, Meta AI announced and released OPT-IML, a new collection of NLP models based on its OPT models. They have been fine-tuned on a new instruction-tuning benchmark called OPT-IML Bench.


OPT-IML's Performance

OPT-IML (Open Pre-trained Transformer - Instruction Meta-Learning) is architecturally identical to the OPT models it is based on. The only difference is that it is fine-tuned on the newly created OPT-IML Bench benchmark, which combines several existing benchmarks into a large and varied set of instruction tasks to fine-tune on.

OPT-IML comes in two model sizes: 30B and 175B parameters. On average across 14 standard NLP evaluation tasks, OPT-IML outperforms the base OPT models: roughly 7% better at both sizes on zero-shot tasks, and roughly 4% (30B) and 0.4% (175B) better on 32-shot tasks.

OPT-IML is open-source. You can grab the weights for the 30B model right away through its GitHub repository, and request the 175B model via a request form when it becomes available. You can also read the full details about these models and the new benchmark in the research paper here.
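Once you have the weights, running a zero-shot instruction prompt looks like standard causal-LM inference. Below is a minimal sketch using the Hugging Face `transformers` library; the checkpoint name `facebook/opt-iml-30b` and the use of `device_map="auto"` (which requires `accelerate`) are assumptions here, so check the GitHub repository for the exact weight names and loading instructions.

```python
# Hedged sketch: querying an OPT-IML checkpoint with Hugging Face transformers.
# The model name below is an assumption; verify it against the official repo.

def generate(prompt: str,
             model_name: str = "facebook/opt-iml-30b",
             max_new_tokens: int = 64) -> str:
    """Run a zero-shot instruction prompt through an OPT-IML checkpoint."""
    # Imported lazily so the sketch has no import-time dependencies.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    # torch_dtype="auto" picks the checkpoint's native precision;
    # device_map="auto" shards the 30B model across available GPUs.
    model = AutoModelForCausalLM.from_pretrained(
        model_name, torch_dtype="auto", device_map="auto"
    )

    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

# Example usage (requires the weights to be downloaded first):
# print(generate("Instruction: Summarize the following article in one sentence. ..."))
```

Because OPT-IML is instruction-tuned, prompts phrased as natural-language task descriptions (rather than few-shot completions) are the intended usage.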

Tags: ML News