What does Yann LeCun think about AGI? A summary of his talk, "Mathematical Obstacles on the Way to Human-Level AI"

post by Adam Jones (domdomegg) · 2025-04-05


This is a summary of Yann LeCun's talk "Mathematical Obstacles on the Way to Human-Level AI". I've tried to make it more accessible to people who are familiar with basic AI concepts, but not the level of maths Yann presents. You can watch the original talk on YouTube.

I disagree with Yann, but I have tried to represent his arguments as faithfully as possible. I think understanding people whose opinions differ from yours is incredibly important for thinking clearly.

In an appendix on my blog I include Gemini 2.5 Pro's analysis of my summary. In short:

The summary correctly identifies the core arguments, uses LeCun's terminology [...], and reflects the overall tone and conclusions of the talk

Why Yann LeCun thinks LLMs will not scale to AGI

LLMs rely on deep learning for both pre-training and fine-tuning, which is sample-inefficient: they need to see many examples before learning something. Humans and animals learn from far fewer samples.

LeCun's slide

LLMs are trained primarily on text, which carries much less raw data than other modalities. To get to AGI, we need to train models on sensory inputs (e.g. video). Measured in bits, a human takes in more data through their senses than an LLM sees in its training text.

LeCun's slide
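The slide's comparison can be sketched as back-of-envelope arithmetic. The specific numbers below (corpus size, bytes per token, optic-nerve bandwidth, waking hours) are rough order-of-magnitude figures of the kind LeCun typically cites, not exact values from the talk:

```python
# Rough comparison: an LLM's text corpus vs. the visual data a young
# child has received. All numbers are order-of-magnitude assumptions.

llm_tokens = 2e13               # ~20 trillion training tokens (assumed)
bytes_per_token = 3             # a token is roughly 3 bytes of text
llm_bytes = llm_tokens * bytes_per_token

waking_seconds = 16_000 * 3600  # ~16,000 waking hours by age four
optic_nerve_bps = 2e6           # ~2 MB/s through the optic nerves (assumed)
child_bytes = waking_seconds * optic_nerve_bps

print(f"LLM text:     {llm_bytes:.1e} bytes")
print(f"Child vision: {child_bytes:.1e} bytes")
print(f"Ratio:        {child_bytes / llm_bytes:.1f}x")
```

Even with generous assumptions for the LLM, the child comes out ahead — which is the slide's point: vision delivers more raw bits than all the text on the internet.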

LLMs are set up to predict the next token, which means predicting in a space with exponentially many possible sequences, of which only a tiny fraction are correct. As an answer gets longer, the model becomes almost certain to drift off the set of correct continuations. The problem is even worse for images and video: the space of outputs is so vast, and the world only partially predictable, that exact prediction isn't feasible.

My visualisation

LeCun's slides
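The divergence argument can be made concrete with a toy calculation: if each generated token has an independent probability e of leaving the set of correct continuations, the chance an n-token answer is entirely correct is (1 − e)^n, which decays exponentially in answer length. (The independence assumption is LeCun's simplification, and a common objection to the argument.)

```python
# Toy model of LeCun's divergence argument: per-token error probability e,
# answer length n. P(whole answer correct) = (1 - e)^n decays exponentially.

def p_fully_correct(e: float, n: int) -> float:
    """Probability that all n tokens stay correct, assuming independent errors."""
    return (1 - e) ** n

for n in (10, 100, 1000):
    print(f"n={n:5d}  P(correct) = {p_fully_correct(0.01, n):.2e}")
```

With e = 1%, a 100-token answer is fully correct only about a third of the time, and a 1000-token answer almost never — the model's chance of staying on track shrinks exponentially, not linearly.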

LLMs spend the same amount of compute on a problem whether it is easy or hard. A system with human-level intelligence should spend longer thinking about harder problems.

LeCun's slide
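One way to see the constant-compute point: an autoregressive model's cost per token is fixed (a standard approximation is ~2 × n_params FLOPs per forward pass — my figure, not the talk's), so total compute depends only on how many tokens are generated, never on how hard the question is:

```python
# Sketch of fixed per-token compute in an autoregressive LLM.
# Assumption (not from the talk): one forward pass costs ~2 * n_params FLOPs.

def generation_flops(n_params: float, n_tokens: int) -> float:
    """Approximate FLOPs to generate n_tokens with an n_params-parameter model."""
    return 2 * n_params * n_tokens

easy = generation_flops(7e9, 50)  # trivial question, 50-token answer
hard = generation_flops(7e9, 50)  # much harder question, same answer length
print(easy == hard)               # True: difficulty never enters the formula
```

Difficulty appears nowhere in the cost: the only lever is answer length, which is why LeCun argues for architectures that can deliberate (spend variable compute) before answering.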

How Yann LeCun thinks we should build AGI

He argues that AGI:

- needs to be modelled on how humans think, and
- needs to be safe and controllable by design.

The specific architecture he proposes is a joint-embedding predictive architecture (JEPA), which makes predictions in an abstract representation space rather than in raw token or pixel space.

AI should be released open-source, and be used as an amplifier of human intelligence.
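LeCun's proposed alternative to raw next-token prediction is to predict in an abstract representation space: encode both the observed context and the target, predict the target's representation, and score compatibility with an energy function (his joint-embedding predictive architecture, JEPA). A toy numpy sketch of that idea — every name, shape, and weight below is purely illustrative, not from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):
    """Toy encoder: a linear map into representation space."""
    return W @ x

# Illustrative dimensions and random weights (assumptions, not the talk's).
d_in, d_rep = 16, 4
W_x = rng.normal(size=(d_rep, d_in))      # encodes the observed context x
W_y = rng.normal(size=(d_rep, d_in))      # encodes the target y
W_pred = rng.normal(size=(d_rep, d_rep))  # predictor in representation space

x = rng.normal(size=d_in)  # e.g. past video frames
y = rng.normal(size=d_in)  # e.g. the next frame

s_x = encoder(x, W_x)
s_y = encoder(y, W_y)
s_y_hat = W_pred @ s_x     # predict y's representation from x's

# Energy: low when y is compatible with x. Unpredictable details of y
# never need to be reconstructed -- only its abstract representation.
energy = float(np.sum((s_y_hat - s_y) ** 2))
print(f"energy D(x, y) = {energy:.3f}")
```

The design point is that prediction happens after encoding, so the model can discard the unpredictable details of the world (exact pixels, exact word choice) that make exact generative prediction infeasible.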
