Yann LeCun discusses the path to Human-Level AI
Yann LeCun, Chief AI Scientist at Meta and a professor at NYU, delivered a thought-provoking talk on the challenges and future direction of artificial intelligence at a recent AI research conference. LeCun, a Turing Award winner, focused on the limitations of current large language models and proposed a new framework for advancing toward human-level AI.
The Current Limits of AI
In his presentation, LeCun emphasized that despite the hype surrounding AI, systems like LLMs are still far from achieving human-like cognition. “Current AI can predict and generate text but lacks fundamental reasoning, planning, and intuition,” he noted. LeCun highlighted the gap between AI's impressive performance in language tasks and its inability to perform basic physical tasks like clearing a dinner table or driving a car autonomously.
LeCun explained that tasks humans find simple, such as walking or perceiving objects, are incredibly difficult for AI, while tasks requiring abstract thinking, like playing chess, are far easier for machines to master. This inversion, known as Moravec's paradox, underscores the challenge of building machines that can navigate the physical world and reason effectively.
The Need for World Models and Hierarchical Planning
LeCun argued that to reach human-level AI, future systems must develop world models—internal representations of how the world works. These models would allow AI systems to plan actions, predict outcomes, and adapt to new situations, much like humans and animals. “A 10-year-old can figure out how to load a dishwasher in one attempt,” LeCun said. “We still don’t have AI that can do that.”
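To make the world-model idea concrete, here is a toy sketch in PyTorch of planning with a learned dynamics model: the system imagines the outcome of many candidate action sequences and keeps the one that lands closest to a goal. Everything here is an illustrative assumption, the random-shooting planner, the network sizes, and the `WorldModel` and `plan` names included; it is not a method LeCun presented.

```python
import torch
import torch.nn as nn

class WorldModel(nn.Module):
    """Learned dynamics: predict the next state from (state, action)."""
    def __init__(self, state_dim=8, action_dim=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, 64), nn.ReLU(),
            nn.Linear(64, state_dim),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1))

def plan(model, state, goal, horizon=5, candidates=256, action_dim=2):
    """Pick the action sequence whose imagined rollout ends closest to the goal."""
    actions = torch.randn(candidates, horizon, action_dim)  # random candidate plans
    s = state.expand(candidates, -1)
    with torch.no_grad():
        for t in range(horizon):
            s = model(s, actions[:, t])        # imagine one step ahead
    cost = ((s - goal) ** 2).sum(dim=-1)       # distance of final imagined state to goal
    return actions[cost.argmin()]              # keep the best candidate plan

model = WorldModel()
best_actions = plan(model, torch.zeros(1, 8), torch.ones(1, 8))
```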
He emphasized the importance of hierarchical planning, where actions are planned at different levels of abstraction—similar to how a person plans a trip by breaking it into smaller tasks. “Humans don’t plan every muscle movement when going to Paris; they plan step-by-step, in higher-level chunks,” he explained. LeCun noted that building such hierarchical planning into AI remains an unsolved problem.
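A minimal way to picture hierarchical planning in code is a two-level planner: a high-level step produces coarse subgoals, and each subgoal is expanded into finer actions only when needed. The decompositions below are hypothetical toys echoing LeCun's Paris example, not an actual planning algorithm.

```python
def high_level_plan(goal):
    """Plan in coarse, abstract steps (the 'go to Paris' level)."""
    return ["get to the airport", "fly to destination", "reach the hotel"]

def low_level_plan(subgoal):
    """Expand one abstract step into finer-grained actions
    (hypothetical decompositions, for illustration only)."""
    refinements = {
        "get to the airport": ["pack a bag", "call a taxi", "ride to the airport"],
        "fly to destination": ["check in", "board the plane", "land"],
        "reach the hotel": ["take the train into the city", "walk to the hotel"],
    }
    return refinements.get(subgoal, [subgoal])

def hierarchical_plan(goal):
    # Expand abstract steps top-down rather than planning every
    # low-level action (let alone muscle movement) up front.
    return [action
            for subgoal in high_level_plan(goal)
            for action in low_level_plan(subgoal)]

print(hierarchical_plan("go to Paris"))
```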
Why Generative Models Fail for Physical Intelligence
LeCun critiqued generative models, including LLMs, for their inability to develop common-sense reasoning. Predicting pixels in a video, for example, has proven ineffective for understanding the world at a conceptual level. “A baby can understand gravity by watching objects fall. But after ten years of trying, we still can’t train an AI to learn the same insight just by watching videos,” he said.
He proposed shifting from traditional generative models to Joint Embedding Predictive Architectures (JEPA). These models focus on predicting abstract representations of data rather than raw pixels or words. LeCun believes that JEPA could enable AI to better understand and reason about the world by learning from representations instead of raw sensory input.
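In rough terms, a JEPA encodes two views of the same data and trains a predictor to map one representation onto the other, so the loss lives in embedding space rather than pixel space. Below is a minimal sketch of that joint-embedding idea in PyTorch; the module sizes, the stop-gradient target encoder, and all names are assumptions for illustration, and real systems like Meta's I-JEPA are considerably more elaborate.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Maps raw input (e.g., an image view) to an abstract representation."""
    def __init__(self, in_dim=784, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x):
        return self.net(x)

class JEPA(nn.Module):
    def __init__(self, in_dim=784, emb_dim=128):
        super().__init__()
        self.context_encoder = Encoder(in_dim, emb_dim)
        self.target_encoder = Encoder(in_dim, emb_dim)  # often an EMA copy of the context encoder
        self.predictor = nn.Sequential(
            nn.Linear(emb_dim, emb_dim), nn.ReLU(),
            nn.Linear(emb_dim, emb_dim),
        )

    def forward(self, context_view, target_view):
        s_x = self.context_encoder(context_view)
        with torch.no_grad():                    # stop-gradient: targets are not trained directly
            s_y = self.target_encoder(target_view)
        s_y_hat = self.predictor(s_x)
        # The loss is computed between representations, not raw pixels.
        return nn.functional.mse_loss(s_y_hat, s_y)

# Toy usage: two "views" of the same batch of examples.
model = JEPA()
x_context = torch.randn(32, 784)   # e.g., a masked view
x_target = torch.randn(32, 784)    # e.g., the full view
loss = model(x_context, x_target)
loss.backward()
```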
The Road Ahead: A Call for New Approaches
LeCun also called for a departure from existing machine learning staples like reinforcement learning, probabilistic models, and contrastive methods. “We need to rethink our approach to building intelligent systems,” he urged, “because the current path won’t get us to human-level AI.” Instead, he advocated for energy-based models and optimization-based inference to guide future research.
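The energy-based view turns inference into optimization: rather than sampling an answer from a generative model, the system searches for the output that minimizes a learned compatibility score. Here is a minimal sketch under assumed toy dimensions; `EnergyModel` and `infer` are hypothetical names, not an established API.

```python
import torch
import torch.nn as nn

class EnergyModel(nn.Module):
    """Scores (input, candidate answer) pairs: low energy means compatible."""
    def __init__(self, x_dim=16, y_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + y_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1))

def infer(model, x, y_dim=4, steps=100, lr=0.1):
    """Optimization-based inference: gradient-descend on the answer y
    to find the lowest-energy output for a given input x."""
    for p in model.parameters():
        p.requires_grad_(False)          # freeze the model; only y is optimized
    y = torch.zeros(x.shape[0], y_dim, requires_grad=True)
    opt = torch.optim.SGD([y], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(x, y).sum().backward()     # total energy of the batch
        opt.step()
    return y.detach()

model = EnergyModel()
x = torch.randn(8, 16)
y_star = infer(model, x)                 # a low-energy answer for each x
```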
Despite the challenges, LeCun remains optimistic about the future. He stressed that AI tools should amplify human intelligence, making people more productive and creative rather than replacing them. “In the future, we may each have a personal collection of virtual assistants working for us—like a staff, but without real humans,” he suggested.
As the field advances, LeCun's message was clear: the journey toward human-level AI will require new thinking, new architectures, and a relentless focus on understanding the essence of intelligence.
The Talk:
Tags: ML News