None of the frontier AI research labs have presented any evidence that they are on the brink of achieving artificial general intelligence, however they define that goal. But Google is already planning for a “Post-AGI” world, hiring a scientist for its DeepMind AI lab to research the “profound impact” the technology will have on society.
“Spearhead research projects exploring the influence of AGI on domains such as economics, law, health/wellbeing, AGI to ASI [artificial superintelligence], machine consciousness, and education,” Google says in the first item on a list of key responsibilities for the job. Artificial superintelligence refers to a hypothetical form of AI that is smarter than the smartest human in all domains. And just to be clear, when Google refers to “machine consciousness,” it’s referring to the science fiction idea of a sentient machine.
OpenAI CEO Sam Altman, DeepMind CEO Demis Hassabis, Elon Musk, and other major and minor players in the AI industry are all working on AGI and have previously talked about the likelihood of humanity achieving it, when that might happen, and what the consequences might be. The Google job listing, however, shows that companies are now taking concrete steps to prepare for what comes after, or are at least continuing to signal that they believe it can be achieved.
Part of the problem is that AGI is a loosely defined term and goal. According to The Information, a 2023 document from OpenAI and Microsoft defined AGI as an AI system that can generate at least $100 billion in profit, a benchmark entirely removed from any scientific measure. Earlier this year, Altman wrote that OpenAI is confident it knows how to build AGI “as we have traditionally understood it” and that the company believes that in 2025 we’ll see the first AI agents “join the workforce.” Microsoft CEO Satya Nadella later downplayed this type of AGI definition, saying “Us self-claiming some AGI milestone, that’s just nonsensical benchmark hacking to me.”
As other critics have previously pointed out, AGI and the massive impact it could theoretically have on society is also a useful marketing strategy for AI companies, allowing them to hype up their value based on something that may or may not happen in the future, while distracting from the actual problems and harm their AI systems are already causing.
Google’s job listing appears to prepare the company for the most ambitious, science fiction-y interpretation of AGI. Beyond spearheading the research projects quoted above, the role’s other key responsibilities include conducting “in-depth studies to analyze AGI’s societal impacts across key domains” and building “infrastructure and evaluation frameworks for a systematic evaluation of AI’s societal effects.”
The job listing comes shortly after DeepMind published a report in early April about “taking a responsible path to AGI.” The report, which states that “AI that’s at least as capable as humans at most cognitive tasks, could be here within the coming years,” details how Google is “taking a systematic and comprehensive approach to AGI safety, exploring four main risk areas: misuse, misalignment, accidents, and structural risks, with a deeper focus on misuse and misalignment.”
Google did not immediately respond to a request for comment.