Artificial intelligence (AI) is playing a central role in the ongoing Middle East war. The United States, for example, has confirmed it is using the technology to identify potential targets and accelerate decision-making.
This is part of a growing trend, and in some cases it is contributing to mounting civilian deaths.
Against this backdrop, Australia’s Department of Defence has just released a new AI policy.
The policy aims to govern the Australian military’s use of AI. So what does it include? And how does it compare to the military AI policies of other countries?
Three main requirements
Australia’s policy establishes three overarching requirements for the Department of Defence’s use of AI.
Firstly, the use of AI must comply with Australian law and international obligations.
Secondly, the use of AI must be underpinned by individual accountability and bounded by consideration of impacts on people. It must also be explainable, reliable and secure, and designed to mitigate unintended bias and harm.
Thirdly, any risks associated with the use of AI must be managed with proportionate control measures, such as testing, training and evaluation.
The policy’s emphasis on proportionate controls is notable.
AI is not a standalone item. It is an enabling technology with many applications that can be embedded across a range of different military functions, such as targeting, logistics, training and maintenance – each raising different risks.
The policy aims to cover all AI technologies, from chatbots to the most advanced “frontier” general-purpose AI models.
The approach echoes the Australian government’s Policy for the Responsible Use of AI in Government, which took effect in September 2024.
That policy explicitly carves out the defence portfolio and national intelligence community. The new policy fills that gap.
Thin on details
The policy says little about how the Army, Navy and Air Force – or other defence entities such as the Australian Strategic Capabilities Accelerator – will actually enact its requirements.
It also says testing and evaluation of the defence department’s use of AI will serve as a key control measure. But it offers no detail on how this will be conducted for military AI – a domain where testing poses well-documented challenges around unpredictable behaviours and unreliable performance in military operating environments.
The Defence AI Centre, established in 2024, is identified as the governance hub. But the policy is thin on detail about implementation, compliance monitoring, resourcing and reporting.
How these settings will evolve – and whether implementation guidance will follow and be made public – remains to be seen.
Drawing on precedent
Australia’s policy draws on those of its closest allies.
For example, the United Kingdom adopted its Defence AI Strategy in 2022 and issued the Dependable AI in Defence directive in 2024.
The UK has moved further to appoint “responsible AI” officers within each Ministry of Defence component. It also published a progress report in 2025.
In 2020, the United States Department of Defense adopted AI ethics principles. Two years later, it developed a detailed implementation strategy. Then in January 2026, the current administration announced its AI Strategy for the Department of War. This shifted emphasis toward speed and lethality, mandating “any lawful use” of AI (which doesn’t always equal ethical use) and directing removal of barriers to rapid deployment.
Australia’s defence AI policy generally aligns with the core elements of these like-minded militaries: AI must be used lawfully, humans must remain accountable, and risks must be anticipated, avoided and mitigated.
One notable difference in Australia’s policy is its reference to Article 36 of Additional Protocol I to the Geneva Conventions. The policy mandates legal reviews of AI in weapon systems – a meaningful commitment few states have enacted.
Another difference is that Australia’s policy lacks the implementation roadmaps found in the US and UK policies. It reads more like a statement of intent.
It is not clear what consequences, if any, this variation in policy and institutional depth may have for AUKUS Pillar II, which involves cooperation on accelerating the development and integration of AI and autonomous technologies.
The heightened significance of national frameworks
International efforts to govern military AI are losing momentum, and multinational discussions on autonomous weapons remain deadlocked.
This means national policy frameworks take on greater significance, shaping procurement and signalling to partners what a state considers acceptable practice.
Contemporary uses of military AI in ongoing conflicts – in Iran, in Lebanon, in Gaza, in Ukraine – remind us governance is not an abstract policy exercise.
Australia’s new policy settings are an important step. The test will be whether they are followed by implementation measures robust enough to effectively govern the development and use of military AI.
As Special Counsel with Lexbridge, Netta Goussac has provided consulting services to the Australian Government. The views and analysis expressed in this article are her own, and do not represent those of Lexbridge, the Australian Government or any other entity.
Zena Assaad does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.