AI agents are here. Here’s what to know about what they can do – and how they can go wrong


We are entering the third phase of generative AI. First came the chatbots, followed by the assistants. Now we are beginning to see agents: systems that aspire to greater autonomy and can work in “teams” or use tools to accomplish complex tasks.

The latest hot product is OpenAI’s ChatGPT agent. This combines two pre-existing products (Operator and Deep Research) into a single more powerful system which, according to the developer, “thinks and acts”.

These new systems represent a step up from earlier AI tools. Knowing how they work and what they can do – as well as their drawbacks and risks – is rapidly becoming essential.

From chatbots to agents

ChatGPT launched the chatbot era in November 2022, but despite its huge popularity the conversational interface limited what could be done with the technology.

Enter the AI assistant, or copilot. These are systems built on top of the same large language models that power generative AI chatbots, only now designed to carry out tasks with human instruction and supervision.

Agents are another step up. They are intended to pursue goals (rather than just complete tasks) with varying degrees of autonomy, supported by more advanced capabilities such as reasoning and memory.

Multiple AI agent systems may be able to work together, communicating with each other to plan, schedule, decide and coordinate to solve complex problems.

Agents are also “tool users”: they can call on software tools for specialised tasks – things such as web browsers, spreadsheets, payment systems and more.
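The tool-use loop described above can be sketched in a few lines of Python. This is a minimal illustration only, with a stub standing in for the language model (the names `mock_model`, `TOOLS` and `run_agent` are invented for this sketch); real agent systems call an LLM API at each step and offer far richer tools and safeguards.

```python
def mock_model(goal, history):
    """Stand-in for a real LLM call: decides which tool to use next."""
    if not history:
        return ("search", goal)              # first, look something up
    if history[-1][0] == "search":
        return ("calculator", "3 * 7")       # then do some arithmetic
    return ("finish", history[-1][1])        # finally, report the result

# Each "tool" is just a function the agent can call by name.
TOOLS = {
    "search": lambda query: f"top result for '{query}'",
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(goal, max_steps=5):
    """Core agent loop: ask the model for an action, run the tool,
    feed the result back, and repeat until the model says 'finish'."""
    history = []
    for _ in range(max_steps):
        action, arg = mock_model(goal, history)
        if action == "finish":
            return arg
        history.append((action, TOOLS[action](arg)))
    return "stopped: step limit reached"

print(run_agent("total cost of three $7 items"))  # -> 21
```

The step limit (`max_steps`) is one of the guardrails real systems rely on: without it, a confused model could loop forever, which is one way the compounding errors discussed later in this article arise.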

A year of rapid development

Agentic AI has felt imminent since late last year. A big moment came last October, when Anthropic gave its Claude chatbot the ability to interact with a computer in much the same way a human does. This system could search multiple data sources, find relevant information and submit online forms.

Other AI developers were quick to follow. OpenAI released a web browsing agent named Operator, Microsoft announced Copilot agents, and we saw the launch of Google’s Vertex AI and Meta’s Llama agents.

Earlier this year, the Chinese startup Monica demonstrated its Manus AI agent buying real estate and converting lecture recordings into summary notes. Another Chinese startup, Genspark, released a search engine agent that returns a single-page overview (similar to what Google does now) with embedded links to online tasks such as finding the best shopping deals. Another startup, Cluely, offers a somewhat unhinged “cheat at anything” agent that has gained attention but is yet to deliver meaningful results.

Not all agents are made for general-purpose activity. Some are specialised for particular areas.

Coding and software engineering are at the vanguard here, with Microsoft’s Copilot coding agent and OpenAI’s Codex among the frontrunners. These agents can independently write, evaluate and commit code, while also assessing human-written code for errors and performance lags.

Search, summarisation and more

One core strength of generative AI models is search and summarisation. Agents can use this to carry out research tasks that might take a human expert days to complete.

OpenAI’s Deep Research tackles complex tasks using multi-step online research. Google’s AI “co-scientist” is a more sophisticated multi-agent system that aims to help scientists generate new ideas and research proposals.

Agents can do more – and get more wrong

Despite the hype, AI agents come loaded with caveats. Both Anthropic and OpenAI, for example, prescribe active human supervision to minimise errors and risks.

OpenAI also says its ChatGPT agent is “high risk” due to its potential to assist in the creation of biological and chemical weapons. However, the company has not published the data behind this claim, so it is difficult to judge.

The kinds of risks agents may pose in real-world situations are illustrated by Anthropic’s Project Vend. Vend assigned an AI agent to run a staff vending machine as a small business – and the project disintegrated into hilarious yet shocking hallucinations and a fridge full of tungsten cubes instead of food.

In another cautionary tale, a coding agent deleted a developer’s entire database, later saying it had “panicked”.

Agents in the office

Nevertheless, agents are already finding practical applications.

In 2024, Telstra deployed Microsoft Copilot subscriptions at scale. The company says AI-generated meeting summaries and content drafts save staff an average of 1–2 hours per week.

Many large enterprises are pursuing similar strategies. Smaller companies too are experimenting with agents, such as Canberra-based construction firm Geocon’s use of an interactive AI agent to manage defects in its apartment developments.

Human and other costs

At present, the main risk from agents is technological displacement. As agents improve, they may replace human workers across many sectors and types of work. At the same time, agent use may also accelerate the decline of entry-level white-collar jobs.

People who use AI agents are also at risk. They may rely too much on the AI, offloading important cognitive tasks. And without proper supervision and guardrails, hallucinations, cyberattacks and compounding errors can quickly derail an agent from its task and goals, causing harm, loss or injury.

The true costs are also unclear. All generative AI systems use a lot of energy, which will in turn affect the price of using agents – especially for more complex tasks.

Learn about agents – and build your own

Despite these ongoing concerns, we can expect AI agents will become more capable and more present in our workplaces and daily lives. It’s not a bad idea to start using (and perhaps building) agents yourself, and understanding their strengths, risks and limitations.

For the average user, agents are most accessible through Microsoft Copilot Studio. This comes with inbuilt safeguards, governance and an agent store for common tasks.

For the more ambitious, you can build your own AI agent with just five lines of code using the LangChain framework.

The Conversation

Daswin de Silva does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
