AI heavyweights call for end to ‘superintelligence’ research

I have worked in AI for more than three decades, including with pioneers such as John McCarthy, who coined the term “artificial intelligence” in 1955.

In the past few years, scientific breakthroughs have produced AI tools that promise unprecedented advances in medicine, science, business and education.

At the same time, leading AI companies have the stated goal of creating superintelligence: not merely smarter tools, but AI systems that significantly outperform all humans on essentially all cognitive tasks.

Superintelligence isn’t just hype. It’s a strategic goal determined by a privileged few, and backed by hundreds of billions of dollars in investment, business incentives, frontier AI technology, and some of the world’s best researchers.

What was once science fiction has become a concrete engineering goal for the coming decade. In response, I and hundreds of other scientists, global leaders and public figures have put our names to a public statement calling for superintelligence research to stop.

What the statement says

The new statement, released today by the AI safety nonprofit Future of Life Institute, is not a call for a temporary pause, as we saw in 2023. It is a short, unequivocal call for a global ban:

We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.

The list of signatories represents a remarkably broad coalition, bridging divides that few other issues can. The “godfathers” of modern AI are present, such as Yoshua Bengio and Geoff Hinton. So are leading safety researchers such as UC Berkeley’s Stuart Russell.

But the concern has broken free of academic circles. The list includes tech and business leaders such as Apple cofounder Steve Wozniak and Virgin’s Richard Branson. It includes high-level political and military figures from both sides of US politics, such as former National Security Advisor Susan Rice and former chairman of the Joint Chiefs of Staff Mike Mullen. It also includes prominent media figures such as Glenn Beck and former Trump strategist Steve Bannon, together with artists such as Will.I.am and respected historians such as Yuval Noah Harari.

Why superintelligence poses a unique challenge

Human intelligence has reshaped the planet in profound ways. We have rerouted rivers to generate electricity and irrigate farmland, transforming entire ecosystems. We have webbed the globe with financial markets, supply chains, air traffic systems: enormous feats of coordination that depend on our ability to reason, predict, plan, innovate and build technology.

Superintelligence could extend this trajectory, but with a crucial difference. People will no longer be in control.

The danger is not so much a machine that wants to destroy us, but one that pursues its goals with superhuman competence and indifference to our needs.

Imagine a superintelligent agent tasked with ending climate change. It might logically decide to eliminate the species that’s producing greenhouse gases.

Instruct it to maximise human happiness, and it might find a way to trap every human brain in a perpetual dopamine loop. Or, in Swedish philosopher Nick Bostrom’s famous example, a superintelligence tasked with producing as many paperclips as possible might try to convert all of Earth’s matter, including us, into raw material for its factories.

The issue is not malice but mismatch: a system that follows its instructions too literally, with the power to act on them cleverly and swiftly.

History shows what can go wrong when our systems grow beyond our capacity to predict, contain or control them.

The 2008 financial crisis began with financial instruments so intricate that even their creators could not foresee how they would interact until the entire system collapsed. Cane toads introduced in Australia to fight pests have instead devastated native species. The COVID pandemic exposed how global travel networks can turn local outbreaks into worldwide crises.

Now we stand on the verge of creating something far more complex: a mind that can rewrite its own code, redesign its own goals, and out-think all of humanity combined.

A history of inadequate governance

For years, efforts to manage AI have focused on risks such as algorithmic bias, data privacy, and the impact of automation on jobs.

These are important issues. But they fail to address the systemic risks of creating superintelligent autonomous agents. The focus has been on applications, not the ultimate stated goal of AI companies to create superintelligence.

The new statement on superintelligence aims to start a global conversation not just on specific AI tools, but on the very destination AI developers are steering us toward.

The goal of AI development should be to create powerful tools that serve humanity, not autonomous superintelligent agents that operate beyond human control and without regard for human well-being.

We can have a future of AI-powered medical breakthroughs, scientific discovery, and personalised education. None of these require us to build an uncontrollable superintelligence that could unilaterally decide the fate of humanity.

The Conversation

Mary-Anne Williams does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
