Over the weekend, the Wall Street Journal reported that the US military had used Anthropic’s Claude AI chatbot for its invasion of Venezuela and kidnapping of the country’s president, Nicolás Maduro.
The exact details of Claude’s use remain hazy, but the incident demonstrated how heavily the Pentagon is prioritizing AI, and how tools available to the public may already be playing a role in military operations. And when Anthropic learned about it, its response was icy.
In a statement to the WSJ, an Anthropic spokesperson remained tight-lipped on whether “Claude, or any other AI model, was used for any specific operation, classified or otherwise,” but noted that “any use of Claude — whether in the private sector or across government — is required to comply with our Usage Policies, which govern how Claude can be deployed.”
The deployment reportedly occurred through the AI company’s partnership with the shadowy military contractor Palantir. Anthropic also signed a contract worth up to $200 million with the Pentagon last summer as part of the military’s broader adoption of the tech, alongside OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok.
Whether the Pentagon’s use of Claude broke any of Anthropic’s rules remains unclear. Claude’s usage guidelines forbid it from being used to “facilitate or promote any act of violence” or “develop or design weapons,” or for “surveillance.”
Either way, Trump administration officials are now considering cutting ties with Anthropic over the company’s insistence that mass surveillance of Americans and fully autonomous weaponry remain off limits, Axios reports.
“Everything’s on the table,” including a dialing back of the partnership, a senior administration official told Axios. “But there’ll have to be an orderly replacement [for] them, if we think that’s the right answer.”
Anthropic reportedly reached out to Palantir to figure out whether Claude was used during the attacks on Venezuela, according to Axios’ sourcing, a sign of a broader culture clash and of growing concern about the tech being implicated in military operations.
Late last month, Anthropic had already clashed with the Pentagon over the limits of its $200 million contract, including which law enforcement agencies, such as Immigration and Customs Enforcement and the Federal Bureau of Investigation, could deploy Claude, per prior reporting by the WSJ.
Anthropic CEO Dario Amodei has repeatedly warned of the inherent risks of the tech his company is working on, calling for more government oversight, regulation, and guardrails. He has also raised concerns over AI being used for autonomous lethal operations and domestic surveillance.
In a lengthy essay posted earlier this year, Amodei argued that large-scale AI-facilitated surveillance should be considered a crime against humanity.
Meanwhile, defense secretary Pete Hegseth doesn’t appear to share these hangups. Earlier this year, he promised that the Pentagon wouldn’t “employ AI models that won’t allow you to fight wars,” a comment that, sources told the WSJ, was about the Pentagon’s discussions with Anthropic.
In statements to both the WSJ and Axios, Anthropic continues to insist that it’s “committed to using frontier AI in support of US national security.”
However, its standoff with the Pentagon appears to be going over well with its non-government users, many of whom have been horrified to see the tech implicated in military action.
“Good job Anthropic, you just became the top closed [AI] company in my books,” one top post reads on the Claude subreddit.
More on Anthropic: Anthropic CEO Says Company No Longer Sure Whether Claude Is Conscious