
NVIDIA CEO Makes A Bold Prediction

On the company's quarterly earnings call, NVIDIA CEO Jensen Huang predicted another million-times acceleration in AI development over the course of the next decade.
Created on February 27 | Last edited on February 28
Jensen Huang, the CEO and co-founder of NVIDIA, has an optimistic view of the future of AI development and voiced a few predictions on a recent earnings call. In the call, Huang asserted that AI processing performance has increased roughly a million-fold over the last 10 years. He compared this to Moore's Law, which even in its best days would optimistically have produced only about a 100x improvement over the same period.
Despite the gradual slowing of Moore's Law, Huang said that he expects another million-times speedup in the coming decade.
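To put these decade-long multipliers in perspective, a back-of-the-envelope calculation (illustrative only, not from the call) shows the constant annual growth rate each one implies:

```python
def annual_rate(total_speedup: float, years: int = 10) -> float:
    """Constant yearly multiplier that compounds to total_speedup over `years` years."""
    return total_speedup ** (1 / years)

# Moore's Law at its optimistic best: ~100x per decade
moore_rate = annual_rate(100)        # ~1.58x per year

# The million-x AI speedup Huang describes
ai_rate = annual_rate(1_000_000)     # ~3.98x per year

print(f"Moore's Law: ~{moore_rate:.2f}x per year")
print(f"AI stack:    ~{ai_rate:.2f}x per year")
```

In other words, a million-fold improvement per decade amounts to roughly quadrupling effective performance every year, sustained for ten years.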

The Bottleneck

This performance increase, of course, is not strictly a matter of GPU performance: it relies on new AI research, software frameworks, and hardware interconnects all advancing together to yield such a large speedup in development time.
To give Huang credit, it does seem like our current rate of innovation could continue over the next 10 years. With any technology development cycle, there is usually a bottleneck that prevents the completion of a desired goal despite nearly all other sub-problems being solved.
Currently, it seems most likely that the bottleneck will be neither hardware nor software infrastructure, but AI research itself. Meta AI recently published work on its new open-source large language model (LLM), LLaMA, which achieved performance comparable to GPT-3 using only a fraction of the parameters.
What this means is that even if we see the infrastructure improvements Huang refers to, we may still be limited by the underlying theoretical techniques that allow these models to perform in the first place, and increased compute power may be only a small part of the overall path to AGI.
However, it’s important that we continue to invest heavily in AI infrastructure, as it will increase research velocity and prepare us to put future AI research breakthroughs to use quickly. Despite the recent rise of ChatGPT, its underlying technology has existed for over five years.
By accelerating developer velocity and improving access to larger amounts of compute, we will likely be able to implement future breakthroughs in much less time.
Tags: ML News