MLPerf Results Published For November 9th Benchmarking Rounds
The time for MLPerf results has come again, this time with two training benchmarks and one inference benchmark, alongside a press release claiming performance increases of up to 5X.
Created on November 9
The MLPerf benchmarking rounds put the latest hardware to the test on a variety of training and inference benchmarks. Today, new results were released for three benchmark categories: MLPerf Training v2.1, MLPerf Training: HPC v2.0, and MLPerf Inference: Tiny. As always, the results arrived alongside a press release claiming general performance increases of up to 5X.
MLPerf November 9th rounds
The three rounds were as follows:
MLPerf Training v2.1
Being the most broadly scoped of the three results today, the MLPerf Training v2.1 round had nearly 200 total submissions from 18 contributors. Submitters trained ML models to specified quality thresholds on a variety of ML tasks, recording how long training took. Because training times can vary greatly, each model is trained several times and the runs are averaged to get the final result.
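The averaging step above can be sketched in a few lines. This is a simplified illustration, not the official MLPerf scoring code: the function name is hypothetical, and the trimming rule (drop the fastest and slowest run, then average the rest) is one common way to reduce run-to-run variance.

```python
def benchmark_score(run_times_minutes):
    """Aggregate repeated training runs into one time-to-train score.

    Hypothetical sketch: drops the fastest and slowest runs, then
    averages the remainder to damp run-to-run variance. The exact
    rule MLPerf applies may differ.
    """
    if len(run_times_minutes) < 3:
        raise ValueError("need at least 3 runs to trim outliers")
    trimmed = sorted(run_times_minutes)[1:-1]  # discard min and max
    return sum(trimmed) / len(trimmed)

# Example: five runs of the same benchmark, time-to-train in minutes.
print(benchmark_score([28.1, 30.4, 29.5, 27.9, 31.2]))
```

Averaging a trimmed set rather than the raw runs keeps one unlucky slow run (or one lucky fast one) from dominating the reported result.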
This round's ML tasks were identical to those of MLPerf Training v2.0.

MLPerf Training: HPC v2.0
This benchmark tests high-performance computing platforms meant for seriously heavy workloads, such as certain scientific endeavors or cases where many models are trained at once. Five organizations submitted to this round. As in the round above, they measured how long it takes to train a model to a sufficient level of quality on their hardware, but they also benchmarked training multiple models at once in a similar fashion.
The three tasks this time around were the same as in MLPerf Training: HPC v1.0.

MLPerf Inference: Tiny v1.0
Unlike the other two, this round tests inference on small-scale hardware, such as the processors onboard internet-of-things devices. Here, hardware is tested not just for speed but also for energy efficiency.
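The energy-efficiency side of that measurement comes down to simple arithmetic: energy per inference is average power draw times latency. The helper below is a hypothetical illustration of that relationship only; MLPerf Tiny's actual methodology measures energy with dedicated power-monitoring hardware.

```python
def inferences_per_joule(avg_power_watts, latency_seconds):
    """Energy efficiency expressed as inferences per joule.

    Hypothetical helper: energy per inference = power * latency,
    so efficiency is its reciprocal. Real MLPerf Tiny submissions
    measure power with external monitoring hardware.
    """
    energy_joules = avg_power_watts * latency_seconds
    return 1.0 / energy_joules

# A microcontroller drawing 0.05 W at 20 ms per inference:
print(inferences_per_joule(0.05, 0.020))  # 1000.0 inferences per joule
```

This is why the Tiny round reports both latency and energy: a chip can be fast yet power-hungry, and the two numbers together capture the trade-off.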
The tasks for this round are also the same as they have been in previous versions.

Find out more
Tags: ML News