Alexander Lerchner, a senior staff scientist at Google’s artificial intelligence laboratory DeepMind, has published a paper asserting that no computational system—including advanced AI—will ever become conscious. His conclusion directly opposes the public statements of AI executives, including DeepMind’s own CEO, Demis Hassabis, who has repeatedly emphasized the imminent arrival of artificial general intelligence (AGI).

Hassabis recently described AGI’s impact as “going to be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed.” Lerchner’s paper, titled “The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness,” challenges the optimistic narratives promoted by AI companies, arguing that such claims collapse under rigorous scrutiny.

Expert Reactions: Praise and Criticism

While some philosophers and researchers praised Lerchner’s paper for bringing attention to the debate from within a major AI lab, others criticized its lack of engagement with existing scholarship. Johannes Jäger, an evolutionary systems biologist and philosopher, told 404 Media that Lerchner’s argument reinvents long-standing ideas without proper citation:

“I think he [Lerchner] arrived at this conclusion on his own and he’s reinvented the wheel and he’s not well read, especially in philosophical areas and definitely not in biology.”

The Core Argument: Mapmaker-Dependence

Lerchner’s paper introduces the concept of “mapmaker-dependence,” arguing that AI systems require an external cognitive agent—a human—to structure the world into usable data. He contends that AI cannot achieve consciousness because it lacks the foundational experiences of a living being. Jäger elaborated on this point:

“You have many other motivations as a human being. It’s a bit more complicated than that, but all of those spring from the fact that you have to eat, breathe, and you have to constantly invest physical work just to stay alive, and no non-living system does that. An LLM doesn’t do that. It’s just a bunch of patterns on a hard drive. Then it gets prompted and it runs until the task is finished and then it’s done. So it doesn’t have any intrinsic meaning. Its meaning comes from the way that some human agent externally has defined a meaning.”

Lerchner’s argument centers on the “abstraction fallacy”—the mistaken belief that AI’s ability to manipulate language, symbols, and images in human-like ways implies it can achieve true consciousness. He asserts that consciousness requires a physical body and intrinsic motivations tied to survival, neither of which AI possesses.

Missing Context: A Decades-Old Debate

Critics note that Lerchner’s paper omits extensive prior research on consciousness, including biological and philosophical perspectives. Jäger acknowledged the paper’s strengths but emphasized that its core arguments have been explored for decades by other scholars. The debate over whether AI can ever be conscious remains unresolved, with Lerchner’s paper adding a notable dissenting voice from inside the industry.

Source: 404 Media