The ongoing attacks on the Islamic Republic of Iran, launched by a joint coalition of US and Israeli military forces, have so far claimed 555 Iranian lives, including 165 deaths from an attack on an elementary school in southern Iran.
As the Wall Street Journal reported while the attacks unfolded, Anthropic’s Claude chatbot had a hand in selecting the strike force’s targets.
According to the paper, Anthropic’s large language model, Claude, is the key “AI tool” used by US Central Command in the Middle East. Its tasks include assessing intelligence, simulating war games, and even identifying military targets — in short, helping military leaders plan attacks that have already claimed hundreds of lives.
Anthropic’s role in the devastating attacks might come as news to anyone who thought the company’s ethical redlines precluded it from any military work whatsoever. The company and its CEO, Dario Amodei, have been locked in a messy conflict with the Trump administration over two particular moral boundaries: the use of Claude for surveillance of US citizens, and for fully autonomous lethal weaponry.
It appears that using Claude to select targets, though, doesn’t brush up against the bot’s ethical guardrails.
That’s striking, because Anthropic has spent the latter part of February embroiled in conflict with the Pentagon over the use of Claude.
Last week, the Pentagon — which currently uses Claude throughout its classified systems — set a deadline for Anthropic to drop those dual redlines of surveillance and fully autonomous weaponry. Anthropic let that deadline go by without caving, establishing what many understood as a principled stance against the Trump administration’s militarism.
Yet as Pulitzer Prize-winning national security journalist Spencer Ackerman observed, it’s important to note what Anthropic’s ethical lines ignored when it inked its deal with the military in the first place.
“Amodei, it is highly conspicuous, doesn’t register building a surveillance panopticon of foreigners as a problem,” Ackerman wrote. “The time to worry about everything ostensibly concerning Amodei was before signing the contract that Amodei didn’t wish to abandon. America is in such steep decline that we don’t even make Oppenheimers like we used to.”
“When you take Doctor Doom’s money to provide him a lathe to construct components for anthropomorphic robots,” Ackerman continued, “do you not understand that he is going to build Doombots?”
More on Claude: Anthropic Drops Its Huge Safety Pledge That Was Supposedly the Whole Point of the Company