A senior staff scientist at Google’s artificial intelligence laboratory DeepMind, Alexander Lerchner, argues in a new paper that no AI or other computational system will ever become conscious. That conclusion appears to conflict with the narrative from AI company CEOs, including DeepMind’s own Demis Hassabis, who repeatedly talks about the advent of artificial general intelligence. Hassabis recently claimed AGI is “going to be something like 10 times the impact of the Industrial Revolution, but happening at 10 times the speed.”
The paper highlights the gap between the self-serving narratives AI companies promote in the media and how those narratives hold up under rigorous examination. Other philosophers and researchers of consciousness I talked to said Lerchner’s paper, titled “The Abstraction Fallacy: Why AI Can Simulate But Not Instantiate Consciousness,” is strong and that they’re glad to see the argument come from one of the big AI companies, but that other experts in the field have been making the exact same arguments for decades.
“I think he [Lerchner] arrived at this conclusion on his own and he’s reinvented the wheel and he’s not well read, especially in philosophical areas and definitely not in biology,” Johannes Jäger, an evolutionary systems biologist and philosopher, told me.
Lerchner’s paper is complicated and filled with jargon, but the argument broadly boils down to the point that any AI system is ultimately “mapmaker-dependent,” meaning it “requires an active, experiencing cognitive agent”—a human—to “alphabetize continuous physics into a finite set of meaningful states.” In other words, it needs a person to first organize the world in a way that is useful to the AI system, like, for example, the way armies of low-paid workers in Africa label images in order to create training data for AI.
The so-called “abstraction fallacy” is the mistaken belief that because we’ve organized data in a way that allows AI to manipulate language, symbols, and images so as to mimic sentient behavior, it could actually achieve consciousness. But, as Lerchner argues, this would be impossible without a physical body.
“You have many other motivations as a human being. It’s a bit more complicated than that, but all of those spring from the fact that you have to eat, breathe, and you have to constantly invest physical work just to stay alive, and no non-living system does that,” Jäger told me. “An LLM doesn’t do that. It’s just a bunch of patterns on a hard drive. Then it gets prompted and it runs until the task is finished and then it’s done. So it doesn’t have any intrinsic meaning. Its meaning comes from the way that some human agent externally has defined a meaning.”
One could imagine an embodied AI programmed with human-like physical needs, and Jäger talked about why a system like that couldn’t achieve consciousness as well, but that’s beyond the scope of this article. There are mountains of literature and decades of research that have gone into these questions, and almost none of it is cited in Lerchner’s paper.
“I’m in sympathy with 99 percent of everything that he [Lerchner] says,” Mark Bishop, a professor of cognitive computing at Goldsmiths, University of London, told me. “My only point of contention is that all these arguments have been presented years and years ago.”
Both Bishop and Jäger said that it was good, but odd, that Google allowed Lerchner to publish the paper. The argument Lerchner makes, and which they agree with, is not an obscure philosophical point irrelevant to the average user: if AI can’t achieve consciousness, then there’s a hard cap on what AI could accomplish practically and commercially. From this perspective, Jäger and Bishop said, AGI, and the impact 10 times that of the Industrial Revolution that DeepMind CEO Hassabis predicts, is not likely.
“[Elon] Musk himself has argued that to get level five autonomy [in self-driving cars] you need generalized autonomy” which is Musk’s term for AGI, Bishop said.
Lerchner’s paper argues that AGI without sentience is possible, saying that “the development of highly capable Artificial General Intelligence (AGI) does not inherently lead to the creation of a novel moral patient, but rather to the refinement of a highly sophisticated, non-sentient tool.” DeepMind is also actively operating as if AGI is coming. As I reported last year, for example, it was hiring for a “post-AGI” research scientist.
Lerchner’s paper includes a disclaimer at the bottom that says “The theoretical framework and proofs detailed herein represent the author’s own research and conclusions. They do not necessarily reflect the official stance, views, or strategic policies of his employer.” The paper was originally published on March 10 and is still featured on Google DeepMind’s site. The PDF of the paper itself, hosted on philpapers.org, originally included Google DeepMind letterhead, but after I reached out for comment on April 20 it appears to have been replaced with a new PDF that removes Google’s branding and moves the same disclaimer to the top of the paper. Google did not respond to that request for comment.
“We can imagine many financial and legislative reasons why Google would be sanguine with a conclusion that says computations can’t be consciousness,” Bishop told me. “Because if the converse was true, and bizarrely enough here in Europe, we had some nutters who tried to get legislation through the European Parliament to give computational systems rights just a few years ago, which seems to be just utterly stupid. But you can imagine that Google will be quite happy for people to not think their systems are conscious. That means they might be less subject to legislation either in the US or anywhere in the world.”
Jäger said that he’s happy to see a Google DeepMind scientist publish this research, but said that AI companies could learn a lot by talking to these researchers and educating themselves about the work Lerchner failed to cite in his paper, or simply didn’t know existed.
“The AI research community is extremely insular in a lot of ways,” Jäger said. “For example, none of these guys know anything about the biological origins of words like ‘agency’ and ‘intelligence’ that they use all the time. They have absolutely frighteningly no clue. And I’m talking about Geoffrey Hinton and top people, Turing Prize winners and Nobel Prize winners that are absolutely marvelously clueless about both the conceptual history of these terms, where they came from in their own history of AI, and that they’re used in a very weird way right now. And I’m always very surprised that there is so little interest. I guess it’s just a high pressure environment and they go ahead developing things they don’t have time to read.”
Emily Bender, a Professor of Linguistics at the University of Washington and co-author of The AI Con: How to Fight Big Tech’s Hype and Create the Future We Want, told me that Lerchner might have been told that he’s replicating old work, or that he should at least cite it, if he had gone through a normal peer-review process.
“Much of what’s happening in this research space right now is you get these paper-shaped objects coming out of the corporate labs,” Bender said, referring to papers that don’t go through a proper scientific publishing process.
Bender also told me that the field of computer science, and humanity more broadly, would benefit “if computer science could understand itself as one discipline among peers instead of the way that it sees itself, especially in these AGI labs, as the pinnacle of human achievement, and everybody else is just domain experts […] it would be a better world if we didn’t have that setup.”


