
Last week, one of Australia’s leading artificial intelligence (AI) researchers, Toby Walsh, warned that Australia’s lack of guardrails for AI is putting young people at risk of being “sacrificed for the profits of big tech”.
Walsh’s remarks came after the government scrapped its own proposal to establish an advisory body of AI experts. Instead, the government offered its National AI Plan, which, among other things, stresses investment in data centres, telecommunications infrastructure and workforce training.
The plan also envisages an “AI Safety Institute” (currently recruiting staff), as well as some internal AI transparency measures for the public sector. Transparency results so far have not been great.
What does it all add up to for AI regulation in Australia?
What are other countries doing?
The European Union has attracted attention for its AI Act, which already prohibits such things as using AI systems to exploit vulnerable groups or individuals. However, Europe is struggling to implement rules on high-risk AI uses that are not prohibited.
Several governments in Australia’s region are also passing AI laws, mainly to give themselves the powers to respond when they deem it necessary.
South Korea, Japan and Taiwan – none of them minor AI players – all have newly minted laws, which are meeting the expected pushback from industry.
Not everyone has comprehensive rules
There are countries without any kind of comprehensive AI regulation, including the United States and the United Kingdom.
In the US, President Donald Trump has even prohibited most state-based regulation of private AI uses. Despite the anti-safeguards rhetoric, the federal government has quietly retained strong safeguards for its own use of AI.
The UK has followed an even more erratic path, ending up in a similar place to Australia. Unable to decide what to do, it has opted for technical (non-legal) safeguards, creating the world’s first AI Safety (now Security) Institute – hailed by some, derided by others.
The dilemma of control
The differences in approach between countries are not surprising. Governments face the dilemma of control described by English technology scholar David Collingridge almost 50 years ago:
when [regulatory] change is easy, the need for it cannot be foreseen; when the need for change is apparent, change has become expensive, difficult and time consuming.
What’s more, Australia has limited regulatory clout regarding AI. It is not a significant global AI player in the way it is, for example, in mining, so its influence is limited.
Facing these uncertainties, what should Australia be doing?
Australia’s plan for AI safety
One certainty is that erratic behaviour is not a great option. We have good evidence that regulatory predictability matters for innovation.
In a recent speech, Australia’s Assistant Minister for Science, Technology and the Digital Economy, Andrew Charlton acknowledged this:
one of the important insurance policies we have is regulatory certainty, underpinned by clear principles with broad buy-in.
So, what is the government’s plan?
The official plan to keep Australians safe is a section (action 7) in the National AI Plan. It argues existing Australian frameworks “can apply to AI and other emerging technologies”.
In other words, AI systems and tools can be covered by the rules we already have, such as consumer protections against all misleading and deceptive practices. The government suggested this option back in 2024. (We have previously argued this view, favoured by the Productivity Commission, is not well supported and was not our preferred option.)
Problems with the plan
However, the challenges for applying existing laws, which the government identified years ago, have not gone away.
As we identified in 2023, the existing regulatory frameworks have limitations when it comes to AI.
AI systems are complex, they can act semi-autonomously, and it can be difficult to understand why they do what they do. This makes it very hard to effectively attribute liability or responsibility for AI risks or harms using existing laws and processes.
Regrettably, those limitations have not been addressed systematically – if at all.
Fragmented rules and limited resources
As things stand, the regulatory landscape is highly fragmented and uncertain.
For instance, there are at least 21 mandatory (or quasi-mandatory) state and federal policies about the use of AI in government. Courts have so far had little opportunity to clear things up, with almost no test cases in crucial areas of existing law, including negligence, administrative law, discrimination law, and consumer law.
The new plan is accompanied by a clear commitment to monitor the development and deployment of AI “and respond to challenges as they arise, and as our understanding of the strengths and limitations of AI evolves”.
The issue is: how will that monitoring happen? Will the government really “empower every existing agency across government to take responsibility for AI”?
Dealing with issues such as privacy, consumer protection and anti-discrimination will take money, commitment, and a degree of coordination between agencies we have not witnessed to date.
An uncertain future
For predictability, signals matter. A lot.
If there is a change in government in the US in 2028, will that change how Australia regulates AI – in the same way the beginning of the Trump presidency coincided with the abandonment of Australia’s mandatory AI guardrails proposals?
Is a laissez-faire regulatory approach creating predictability, when we have so many stalled and part-completed regulatory processes?
The government seems to expect courts, government agencies, businesses and individuals to work out on their own how to retrofit old laws and institutions to a new technological landscape.
There is some hope for regulation of automated decision-making in the public sector (promised after the Robodebt Royal Commission). For the rest, it’s a “wait and see” approach to AI regulation. We’ll have to wait and see if it works.
José-Miguel Bello y Villarino receives funding from the Australian Research Council. He frequently provides unpaid advice on AI matters to federal and state governments, including as a former observer to the interim AI expert group established by then minister Ed Husic in 2024.
Henry Fraser has previously received funding from the Australian Research Council. He frequently engages with government agencies on regulation of automated decision making on an unpaid basis, sharing his research with them.


