The Godfather of AI Who Says Everyone Is Wrong

Turing Award winner Yann LeCun just raised Europe's largest-ever seed round—$1.03 billion—to prove the entire AI industry got the fundamental approach wrong.

"The idea that you're going to extend the capabilities of LLMs to the point that they're going to have human-level intelligence is complete nonsense." — Yann LeCun, WIRED interview, 2026

AMI Labs (Advanced Machine Intelligence) launched in March 2026 with a pre-money valuation of $3.5 billion. The Paris-based startup is building what LeCun calls "world models"—AI systems that learn from physical reality through sensors and cameras, not from predicting the next word.


What Are World Models?

LLMs like GPT-4 and Claude work by predicting tokens. They're trained on text. They hallucinate because they never learned how the physical world actually works—they just learned statistical patterns in language.
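To make the point concrete, next-token prediction at its simplest is conditional frequency counting over text. The toy corpus and bigram model below are purely illustrative (real LLMs use neural networks over billions of tokens), but the training objective is the same:

```python
from collections import Counter, defaultdict

# Toy corpus: the model's "world knowledge" is only what co-occurs in text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram frequencies: estimate P(next | current) from text alone.
bigrams = defaultdict(Counter)
for cur, nxt in zip(corpus, corpus[1:]):
    bigrams[cur][nxt] += 1

def predict_next(word):
    # Emit the statistically most likely next token.
    return bigrams[word].most_common(1)[0][0]

print(predict_next("the"))  # -> "cat" (most frequent successor of "the")
```

Nothing in this model knows what a cat or a mat is; it only knows which words tend to follow which, which is the grounding gap LeCun is pointing at.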

World models are fundamentally different:

  • LLMs — training data: text corpora; learning method: next-token prediction; key limitation: no physical grounding, hallucinations
  • World models — training data: sensor data (video, LiDAR, audio); learning method: representation-space prediction; key limitation: requires massive multimodal data

JEPA (Joint Embedding Predictive Architecture) is LeCun's framework. Instead of generating pixels or words, JEPA learns abstract representations and predicts future states in that representation space—not in the raw input space.
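The difference can be sketched in a few lines. In this toy version (the encoder and predictor are invented stand-ins, not AMI's actual architecture), the loss is computed between predicted and actual embeddings rather than between predicted and actual raw inputs:

```python
def encode(x):
    # Stand-in encoder: maps a raw high-dimensional input to a short
    # abstract representation (here, crude summary statistics).
    return (sum(x) / len(x), max(x) - min(x))

def predict_embedding(z, action):
    # Stand-in predictor: estimates how the representation shifts
    # under an action -- prediction happens in representation space.
    return (z[0] + action, z[1])

def jepa_loss(x_now, action, x_next):
    # JEPA-style objective: compare embeddings, not raw pixels.
    z_pred = predict_embedding(encode(x_now), action)
    z_true = encode(x_next)
    return sum((a - b) ** 2 for a, b in zip(z_pred, z_true))

# A generative model would instead reconstruct x_next itself, forcing it
# to predict every unpredictable detail of the raw input.
frame_now = [0.0, 1.0, 2.0]
frame_next = [1.0, 2.0, 3.0]  # the whole scene shifted by +1
print(jepa_loss(frame_now, 1.0, frame_next))  # -> 0.0
```

The design point: by abstracting away irrelevant detail before predicting, the model is never penalized for failing to guess noise it could not have known.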


The Technical Architecture

AMI is building systems with four core capabilities:

  1. World Understanding — Learn from continuous, high-dimensional sensor data (cameras, LiDAR, audio)
  2. Persistent Memory — Unlike LLMs, which reset context between sessions, world models maintain state
  3. Reasoning & Planning — Predict consequences of actions, plan action sequences
  4. Controllability & Safety — Built-in guardrails for industrial applications

The architecture is modular:

  • Perception Module — Processes raw sensor data into representations
  • World Model — Predicts how representations evolve given actions
  • Cost Module — Evaluates predicted states against goals
  • Actor/Planner — Optimizes action sequences to minimize cost
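These four modules compose into a planning loop: encode the observation, roll candidate action sequences through the world model, score each predicted trajectory with the cost module, and act on the cheapest. A minimal sketch of that loop in a hypothetical 1-D world (all dynamics and names here are invented for illustration):

```python
import itertools

def world_model(state, action):
    # Toy world model: state is a position; actions nudge it.
    return state + action

def cost(state, goal=5.0):
    # Cost module: distance of a predicted state from the goal.
    return abs(state - goal)

def plan(state, actions=(-1.0, 0.0, 1.0), horizon=3):
    # Actor/planner: exhaustively search short action sequences and
    # minimize predicted terminal cost (brute force for clarity; real
    # planners would use gradient-based or sampled optimization).
    best_seq, best_cost = None, float("inf")
    for seq in itertools.product(actions, repeat=horizon):
        s = state
        for a in seq:              # roll the sequence through the model
            s = world_model(s, a)
        c = cost(s)
        if c < best_cost:
            best_seq, best_cost = seq, c
    return best_seq

print(plan(2.0))  # -> (1.0, 1.0, 1.0): step toward the goal at 5.0
```

The key contrast with an LLM is that the model predicts consequences before committing to an action, rather than emitting one token at a time with no lookahead.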

Why LeCun Left Meta

LeCun founded FAIR (Facebook AI Research) in 2013 and spent 12 years pushing world model research inside Meta. But even as JEPA matured, the company's priorities shifted:

"There was a reorientation of Meta's strategy where it had to basically catch up with the industry on LLMs and kind of do the same thing that other LLM companies are doing, which is not my interest." — LeCun, WIRED

Meta pivoted to LLMs. LeCun left in November 2025 to commercialize world models outside the social media giant.


The Funding Details

  • Seed round: $1.03 billion (€890 million)
  • Pre-money valuation: $3.5 billion
  • Locations: Paris (HQ), Montreal, Singapore, New York
  • Lead investors: Cathay Innovation, Greycroft, Hiro Capital, HV Capital, Bezos Expeditions
  • Notable backers: Mark Cuban, Eric Schmidt, Xavier Niel, NVIDIA

This is Europe's largest seed round ever. The investor syndicate spans US tech billionaires, European venture capital, and strategic corporate backers.


The Team

  • Yann LeCun — Chairman, Turing Award winner, NYU professor
  • Alex LeBrun — CEO, founder of Nabla (healthcare AI), former Meta engineer
  • Saining Xie — Chief Science Officer, former Google DeepMind researcher
  • Michael Rabbat — Co-founder, former Meta research science director
  • Laurent Solly — COO, former Meta VP Europe
  • Pascale Fung — Chief Research & Innovation Officer, former Meta senior director

Target Applications

AMI is targeting industries where reliability matters:

  • Manufacturing — Industrial process control, optimization
  • Aerospace — Aircraft engine modeling, efficiency optimization
  • Biomedical — Healthcare AI with safety guarantees
  • Robotics — Embodied AI that understands physical consequences
  • Wearable Devices — On-device intelligence

The Nabla partnership (Alex LeBrun's healthcare company) is the first disclosed customer.


Community Sentiment

From Hacker News:

"AMI Labs just secured a billion in funding, and while that's a lot of money, it's literally just a fraction of the yearly salary they'd need to compete with OpenAI's talent pool."

From Reddit r/singularity:

"This is fundamental research that could actually lead somewhere. LLMs are impressive but they're basically autocomplete with a PhD. World models could give AI actual understanding."

From Reddit r/MachineLearning:

"Autoregressive models are unideal due to the fact that they only have one shot to get the answer right. Imagine an end of sequence token arriving—you can't go back and fix mistakes."

Counterargument from LessWrong:

"If the dataset GPT was trained on contained a lot of examples of GPT making mistakes and then being corrected, it would be able to stay coherent for a long time. This dataset is unlikely to ever exist, given that its size would need to be many times bigger than the entire internet."


The Verdict

LeCun's bet is contrarian but not crazy. His critique of LLMs has technical merit:

  • LLMs cannot plan—they're reactive
  • LLMs hallucinate because they lack world grounding
  • LLMs have no persistent memory
  • LLMs scale but don't understand

The open question is whether JEPA can deliver practical results before steadily improving LLMs close the gap. LeCun estimates it will take 3 to 5 years to know whether world models are viable for real-world deployment.

This is not just a funding story. It's a fundamental challenge to the dominant AI paradigm. The Turing Award winner who pioneered convolutional neural networks is betting his reputation, and $1 billion, that the industry's obsession with LLMs is a detour, not the destination.

