Pentagon Refuses to Say If AI Was Used to Select Elementary School as Bombing Target

In the aftermath of airstrikes that leveled a school and claimed the lives of 165 Iranian elementary students and staff, the Pentagon has refused to say whether the attack was suggested by an AI system.

The grotesque possibility isn’t as far-fetched as it sounds. According to bombshell reporting by the Wall Street Journal, the Pentagon used Anthropic’s Claude AI model in planning military strikes on Iran over the weekend — and is likely still using it as the Trump administration’s attacks carry on.

In the opening salvo, either the US or Israel — though available information points to the former — obliterated the Shajareh Tayyebeh girls’ school, located in the southern Iranian city of Minab. Most of the people killed in the strike, Al Jazeera reports, were elementary students between the ages of seven and 12. At least 95 other people were injured in the attack.

Making matters even more grim is reporting from Middle East Eye that Shajareh Tayyebeh was hit a second time after the initial missile strike, maiming first responders and parents who had come to collect their children. That so-called “double tap” harks back to US bombings of civilian boats in Venezuela under Donald Trump and air strikes in Pakistan under Barack Obama.

Given the United States’ reported use of AI to select at least some military targets in Iran, a major question remains unanswered: did the US use Claude to decide whether to annihilate an elementary school?

When Futurism reached out to the Pentagon regarding the use of AI in recent military operations — specifically the targeting of the Shajareh Tayyebeh girls’ school — we were referred to US CENTCOM, one of eleven unified commands under the Pentagon’s umbrella. CENTCOM didn’t provide any further information.

The claim that the US military is using Claude to conduct a war that has claimed over 1,000 lives in under a week may seem too galling to believe. Unfortunately, it’s a tune we’ve heard before.

Back in April of 2024, an investigation by +972 Magazine revealed that the Israeli army had leveraged an AI system called “Lavender” to select targets in its war on Gaza, much as the Pentagon is reportedly using Claude in Iran. According to six Israeli intelligence officers, Lavender played a “central role” in the destruction of Gaza and its population, identifying at least 37,000 Palestinians as targets for aerial assassination.

As one intelligence operative told +972, Lavender’s decisions — which often involved suggestions to attack targets in their homes — were treated “as if it were a human decision” by military operatives.

The ethical consequences of such a system are hard to overstate. One Israeli military source told the Guardian: “I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.”

The trend presages a brutal new era of warfare in which it’s no longer clear whether humans, or at least humans alone, are making life-and-death decisions about where to deploy the deadliest arsenal in human history — even when the casualties are dozens of schoolchildren.

Do you have any information about how the US military is using AI? Send us a tip: tips@futurism.com — we can keep you anonymous.

More on military operations: Polymarket Quietly Takes Down Bet On Nuclear Detonation
