
Half of entry-level white-collar jobs might cease to exist in the near future, according to Dario Amodei, the CEO of leading AI company Anthropic. Amodei, whose company is behind the Claude platform, has since called for transparency standards requiring companies that build AI models to show how they are handling risks such as AI enabling cyberattacks or helping to make bioweapons.
Time and again, such claims suggest the pace of development in artificial intelligence is vastly outstripping our ability to adapt and adopt, creating a series of short-term crises.
Yet the debate between AI doomers, accelerationists, utopians and other factions is largely trapped in arguments about whether current AIs are truly demonstrating creativity, problem solving, planning and other intelligent characteristics. It’s as if we’re collectively in denial.
AI is arguably the most important technology humankind will ever invent. We owe it to ourselves, and future generations, to make conscious decisions about introducing AI into everything we do, ensuring that humanity benefits.
We know that AI is threatening the creative industries, for example. We can argue about whether AI is truly creative, or we can set about preserving human creativity, originality and income security.
For instance, the new CREAATIF report from Queen Mary University of London lays out a series of recommendations, such as treating creatives as co-designers alongside AI, not victims of it. It calls for clear disclosure of AI-generated creative works, and for giving creatives the ability to opt out of having their work included in AI training datasets.
We know that AI is being used in warfare. We can argue about what it means for a human to still take crucial battlefield decisions – the idea of “human in the loop”. Or we can set down explicit rules of war, as hinted at by the UN meeting in May on possible restrictions on the use of lethal autonomous systems.
We know that AI is being used in medicine, from screening blood tests to virtual hospitals, such as the one created by Tsinghua University in China. We can argue about whether AI can ever replace doctors, or we can actively explore where it is most appropriate and desirable to supplement human healthcare expertise with AI.
Jobs and knowledge
We also strongly suspect that AI will displace human jobs more broadly. Besides Amodei’s warnings, certain companies are already adopting “AI-first” strategies, treating AI as the core driver of company operations rather than just a support tool.
The canary in the coalmine may be graduate jobs, since companies are likely to use AI first for roles requiring the least experience. Graduate hiring in the UK is falling. We can argue about whether there is a link with AI, or we can start putting serious thought into the future of education, skills and the meaning of a career in the 21st century.
Finally, we know that AI is being used to mediate human access to knowledge, whether it’s the recommendation engines in platforms like TikTok and X, or search engines like Google and Bing providing AI summaries in preference to linked websites.
Misinformation, disinformation and fakery are rife, often enabled by AI tools. A more insidious side-effect of AI-mediated access to knowledge is the potential erosion of our ability to judge what is true or reliable.
We can argue about whether this is happening or we can focus on protecting reliable sources of information, and making sure everyone can access them. For example, the US-based Coalition for Content Provenance and Authenticity (C2PA) develops standards to verify where digital media comes from and whether it has been tampered with.
What you can do
AI is not going away, and there will be positives as well as negatives. For instance, AI will undoubtedly help to solve the hard problems of global health, energy generation and climate change.
We need to recognise the power of existing AI technologies, acknowledge that AI is likely to become far more capable very quickly, and act both personally and collectively. There are several things we can do now.
First, take a personal interest. AI literacy is fast becoming a life skill. Leading AI platforms like ChatGPT, Claude and Gemini can create, summarise or rewrite text for you, compile research reports, jazz up presentations, create music, do data analysis, come up with new cooking recipes – the options are endless.

I’ve seen schoolteachers create AI mentors for students, pensioners create songs and presentations, and children place their artwork in historical contexts, all with no technical skills. Similarly, anyone can now use AI to code. So-called “vibe-coding” lets you describe, in words, what you want a piece of software to do, and the AI will create a version of it, to an ever-improving level of completeness.
The ability to adapt and adopt is key. Knowing and practising how to use AI will not only position you for future opportunities and changes, but may allow you to steer your workplace to a better outcome too.
Second, become an advocate for how AI should be used. Developments in the US and China will continue to drive AI innovation, but we have some choice when it comes to adoption and use.
So become an “informed buyer”, actively selecting AI technology from companies with strong ethical, security and privacy standpoints. For instance, I prefer Anthropic’s Claude to OpenAI’s ChatGPT, largely because of the former’s constitutional approach, which means its AIs are trained against a set of written principles rather than on what the AI thinks the user will prefer.
I like Meta’s track record of publishing detailed papers on how it trained and tested its large language models (LLMs), and the fact that it open-sources them. This makes the best models available to a wider and more diverse range of people and organisations, not just the wealthiest companies. I’m also uncomfortable with the way OpenAI recently sought to change its non-profit status. These are personal opinions, and we should each form our own views.
Third, voice your advocacy to your boss, your local MP and other decision makers you come across. It’s only by making AI an everyday topic that we can influence the world we live in. As Tim Cook, the CEO of Apple, once said: “Artificial intelligence is the future, but we must ensure it is a future that we want.”
Andrew Rogoyski’s department receives research funding from UKRI. He acts as an advisor to TechUK, one of the UK’s leading tech industry trade associations, and is a member of the NatWest Technology Advisory Board.