Is the Path to Real AI Being Ignored?

Yann LeCun’s departure from Meta and the forgotten lesson of how humans truly learn


“If we are entrusting machines with our decisions, our diagnostics, even our empathy, shouldn’t we first teach them to learn?”


When one of AI’s founding figures walks away

Yann LeCun — the Turing Award–winning scientist credited with inventing convolutional neural networks (CNNs) in the 1980s — plans to leave Meta after more than a decade to start his own AI venture. His new company will focus on world-model AI: systems that learn by observing and predicting the physical world, rather than simply mimicking language.

The man who helped machines see now wants to help them think.


The limits of language-only learning

LeCun has been clear: large language models, however impressive, are “nowhere near the intelligence of a house cat.” While they excel at producing coherent text, they lack understanding of causality, perception, and action. Current AI models learn correlations between words — not consequences in the real world.

For LeCun, true intelligence requires grounding in experience, interaction, and prediction. A system must understand what happens when it acts, how objects behave, and how others respond — the same way a child builds intuition by exploring, imitating, and adjusting.

World models learn like a child playing with blocks: drop one, it falls. Stack them wrong, they topple. The system builds physical intuition — gravity, balance, consequence — without being told the rules.
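The learning loop described above — observe transitions, infer the hidden rule, then predict consequences — can be sketched in a few lines. This is an illustrative toy only, not any production world-model architecture; the function names (`simulate_drop`, `estimate_gravity`, `predict_next`) and the simple free-fall setting are assumptions chosen for clarity:

```python
# Toy sketch of world-model learning: infer a physical rule (gravity)
# purely from observed transitions, never from an explicit equation.

def simulate_drop(g: float, dt: float, steps: int):
    """Ground-truth physics the learner never sees directly: a block in free fall."""
    h, v, traj = 10.0, 0.0, []
    for _ in range(steps):
        traj.append((h, v))
        v -= g * dt          # gravity accelerates the block downward
        h += v * dt
    return traj

def estimate_gravity(traj, dt: float) -> float:
    """Infer the hidden rule from consecutive observations (v' = v - g*dt)."""
    deltas = [(v0 - v1) / dt for (_, v0), (_, v1) in zip(traj, traj[1:])]
    return sum(deltas) / len(deltas)

def predict_next(h: float, v: float, g_hat: float, dt: float):
    """Use the learned model to anticipate the consequence of the next instant."""
    v_next = v - g_hat * dt
    return h + v_next * dt, v_next

observed = simulate_drop(g=9.81, dt=0.01, steps=200)   # "watching blocks fall"
g_hat = estimate_gravity(observed, dt=0.01)
print(f"learned gravity: {g_hat:.2f}")  # recovers ~9.81 from observation alone
```

The point of the sketch is the division of labor: the simulator stands in for the world, and the learner is never told the rule — it recovers gravity, and with it the ability to predict, from watching alone.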


Human mirror: how we really learn

Humans are not born with libraries of text. We learn by watching, listening, and mimicking others. Mirror neuron systems, thought to underpin observational learning, fire both when we perform an action and when we observe someone else performing it, helping us internalize movements, emotions, and intentions.

A child learns language, empathy, and social intelligence by observing caregivers, trying, failing, and adapting. This is how humans survive and thrive.

If AI is to coexist with us meaningfully — as assistants, partners, or clinical co-processors — shouldn’t it learn in the same way?


Why teach machines like humans?

If AI will interact continuously with humans, why not train it the way humans learn? Trust, empathy, and reliability emerge from shared modes of learning. A machine trained through perception and interaction could align its assistance more closely with human understanding.

Delegating functions — in health, work, and daily life — to AI systems that cannot learn like humans creates risk. Machines that mimic text may misinterpret context, intent, or emotion — critical in healthcare and neuromodulation. Training AI in a human-like manner is about functional alignment, not anthropomorphism.


Why Meta let him go

LeCun’s exit also highlights structural tension. Inside Meta, his FAIR lab once led foundational AI research, but corporate shifts now prioritize large-scale LLM production under Meta Superintelligence Labs, led by Alexandr Wang, CEO of Scale AI, following Meta’s $14.3 billion investment in the company.

From a NeuroEdge Nexus perspective, this reflects a larger question: should we prioritize short-term deployment or long-term intelligence infrastructure? In domains like neuroscience and clinical neurotechnology, the answer is clear: safe, reliable systems require grounded, embodied learning.


Implications for brain-health AI

In neuromodulation, brain-computer interfaces (BCIs), and teleneurophysiology, the brain itself is the ultimate world-model learner — continuously integrating sensory feedback, internal prediction, and adaptive control. AI for these applications must model real interactions: neural dynamics, patient behavior, and therapeutic feedback.

An AI monitoring seizure onset patterns requires embodied temporal understanding — not merely pattern matching in EEG data, but anticipating cascade dynamics in real neural tissue.
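The distinction between pattern matching and anticipation can be made concrete with a toy sketch. This is hypothetical and in no way a clinical algorithm: the signal, threshold, and function names (`baseline_signal`, `anomaly_score`) are all assumptions. The idea it illustrates is the world-model one — a predictor of normal dynamics flags the moment its own predictions start failing, rather than waiting to match a fixed template:

```python
# Toy sketch (not a clinical algorithm): prediction-error monitoring.
# A model of "expected" dynamics flags the step where reality diverges.
import math

def baseline_signal(t: float) -> float:
    """Stand-in for a well-behaved rhythm the model has learned to expect."""
    return math.sin(2 * math.pi * t)

def anomaly_score(samples, predict):
    """Prediction error per step: large error = dynamics the model can't explain."""
    return [abs(x - predict(i)) for i, x in enumerate(samples)]

dt = 0.01
signal = [baseline_signal(i * dt) for i in range(300)]
# Inject an abrupt regime change at step 200 -- the kind of onset a
# template matcher tuned to the baseline would recognize only after the fact.
for i in range(200, 300):
    signal[i] += 2.5

scores = anomaly_score(signal, lambda i: baseline_signal(i * dt))
onset = next(i for i, s in enumerate(scores) if s > 1.0)
print(f"deviation first flagged at step {onset}")  # step 200, the injected change
```

A real system would of course learn its predictor from data and operate on multichannel neural recordings, but the structure is the same: the alarm is the gap between what the model expected and what the tissue actually did.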

World-model architectures provide a framework for embodied intelligence, bridging computational learning with neurophysiological grounding. This mirrors the oldest lesson in neuroscience: intelligence is dynamic and embodied, not static or purely textual.


A turning point for AI — and for us

LeCun’s venture marks a philosophical realignment for the field. Scaling up text-based models alone will not yield understanding. Instead, the future lies in AI that learns through experience and observation, mirroring human development.

For those building neurotechnology: LeCun’s departure is a signal. The companies winning the next decade won’t be those with the largest models — they’ll be those whose AI understands what it means for a patient’s hand to move, for a seizure to begin, for a therapy to work. That’s embodied intelligence. That’s what matters.

Key question for leaders, clinicians, and innovators:

If we are entrusting machines with our decisions, our diagnostics, even our empathy, shouldn’t we first teach them to learn?


NeuroEdge Nexus examines the intersection of neuroscience, technology, and healthcare systems—identifying not just what is possible, but what is required for meaningful clinical translation.
