The MLPerf Training v2.0 Results Are In
The results for the MLPerf Training v2.0 benchmarking round have arrived, with over 250 performance results reported by 21 different organizations.
This past April we got the results of the MLPerf Inference v2.0 benchmarking round, and now it's time to take a look at training. Over 250 performance results were contributed by 21 different organizations this time around, including 6 newcomers to the lineup.
This version of MLPerf also adds a new benchmark for object detection, which involves training RetinaNet on the Open Images dataset.
If you don't know what MLPerf is, it's a suite of standardized benchmarks run periodically by MLCommons to measure the progress of the tools used for machine learning. Organizations participate by running the standardized benchmark workloads on their own ML solutions (hardware, software, etc.), and their results are compiled and compared against each other to determine who excels at what.
Year-over-year comparisons also show the advances made over time.
The benchmarks for this year covered image classification (ResNet-50), object detection (RetinaNet and Mask R-CNN), medical image segmentation (3D U-Net), speech recognition (RNN-T), natural language processing (BERT), recommendation (DLRM), and reinforcement learning (MiniGo), with the top results coming from NVIDIA and Google.
That said, thanks to the increased participation, we get a wider view of progress across the hardware and software solutions that go into machine learning. This round saw performance improvements of up to 1.8x over the previous benchmarking round, clearly showing that the infrastructure behind machine learning continues to advance substantially.