The U.S. military used Anthropic’s Claude artificial intelligence model during an operation to capture Nicolás Maduro in Venezuela, according to a Wall Street Journal report later echoed by other outlets. The reporting tied the deployment to Anthropic’s relationship with Palantir Technologies, whose platforms are widely used across the U.S. Defense Department and federal law enforcement.
The reported use of a major commercial AI model in a real-world military mission is raising fresh questions about how “safety-first” AI rules apply in national security settings. A senior Trump administration official told Axios the Pentagon would re-evaluate its partnership with Anthropic after the news became public, citing concerns that the company could disrupt military operations.
What the reports say happened
A Reuters report, citing the Wall Street Journal, said Claude was used in the U.S. military’s operation to capture “former Venezuelan President Nicolas Maduro,” attributing the claim to people familiar with the matter. Reuters said it could not immediately verify the report, and that the U.S. Defense Department, the White House, Anthropic, and Palantir did not immediately respond to requests for comment.
Axios reported that two sources familiar with the matter said the U.S. military used Claude during the mission to apprehend Maduro, and that Claude was used during the operation rather than only ahead of it. Axios also said it could not confirm the specific role Claude played in the capture.
Accounts differ on the operation’s scope and toll. India Today described the mission as involving the bombing of several sites in Caracas and said it targeted Maduro and his wife. Axios reported that no American casualties were reported, while Cuba and Venezuela said many of their soldiers and security forces were killed. Firstpost wrote that the raid “left dozens of Venezuelans and Cubans dead,” while also saying Claude’s exact role remains classified.
How Claude allegedly entered the mission
Several reports pointed to Palantir as a key channel for Claude’s access inside government systems. Reuters, citing the Wall Street Journal, said Claude’s deployment came via Anthropic’s partnership with Palantir Technologies. India Today similarly reported that Claude’s deployment happened through Anthropic’s partnership with Palantir, describing Palantir’s software platforms as commonly used by the Defense Department and federal law enforcement agencies.
Axios said it was unclear whether Claude’s use in the operation was tied to the Anthropic-Palantir partnership, even while noting the broader relationship between the two companies. Still, Axios argued the episode highlights why the Pentagon values AI models that can process information in real time during fast-moving military situations.
Safety rules meet battlefield use
Anthropic’s usage policies were a central point of tension in multiple accounts. Reuters reported that Anthropic’s usage policies forbid using Claude to support violence, design weapons, or carry out surveillance. India Today also highlighted that Anthropic’s guidelines prohibit using Claude to facilitate violence, develop weapons, or conduct surveillance, while noting the reported Venezuela operation involved strikes in Caracas.
Anthropic pushed back on at least one claim tied to the controversy. Axios reported that a senior administration official alleged Anthropic asked whether its software was used in the mission, but Axios said an Anthropic representative denied the company made such an inquiry. Axios also quoted an Anthropic spokesperson saying the company cannot comment on whether Claude (or any other model) was used in any specific operation, and that any use must comply with Anthropic’s usage policies. India Today similarly quoted an Anthropic spokesperson telling the Wall Street Journal that the company could not comment on whether Claude was used in any specific operation, classified or otherwise, while emphasizing compliance with usage policies.
Pentagon pressure and the contract stakes
The reports also connect the episode to a broader Pentagon push to bring cutting-edge AI onto classified networks. Reuters said the Pentagon is pressing top AI companies, including OpenAI and Anthropic, to make AI tools available on classified networks with fewer of the standard restrictions the companies apply to users. Reuters also reported that many AI tools built for the U.S. military run only on unclassified networks, and that Anthropic’s is the only one available in classified settings through third parties, while remaining bound by the company’s usage policies.
Firstpost reported that the Pentagon is “moving to deploy frontier AI capabilities across all classification levels,” citing an official who requested anonymity. Firstpost also said a senior Trump administration official described the Pentagon as taking a fresh look at its relationship with Anthropic after the news broke.
Both Firstpost and India Today referenced a $200 million contract and described it as facing scrutiny in the wake of the reported operation. India Today said previous reporting indicated that concerns over how the military could use Claude had led officials to consider canceling the agreement.
Defense Secretary Pete Hegseth’s public comments were also cited as reflecting the Pentagon’s drive to adopt AI faster. Firstpost reported that Hegseth said in January the Defense Department would not “employ AI models that won’t allow you to fight wars,” while India Today said he made that remark while referencing discussions officials have had with Anthropic.
