President Donald Trump has ordered all U.S. federal agencies to immediately stop using Anthropic AI technology. The directive follows an escalating public dispute between the artificial intelligence startup and the Pentagon over the military’s demand for unrestricted access to the company’s systems.
In a Friday post on the social media platform Truth Social, Trump announced an immediate halt to all government work with the firm, though the ban allows a six-month phaseout period for the Defense Department and other agencies currently using Anthropic's products.
“I am directing EVERY Federal Agency in the United States Government to IMMEDIATELY CEASE all use of Anthropic’s technology,” Trump wrote. He described the company as “radical left” and “woke,” adding, “We don’t need it, we don’t want it, and will not do business with them again!” He further claimed that Anthropic had made a catastrophic error by attempting to coerce the Defense Department.
The Pentagon Ultimatum and Anthropic’s Refusal
The presidential ban was issued shortly after Anthropic refused to comply with a strict deadline set by Defense Secretary Pete Hegseth. The Pentagon had issued an ultimatum demanding that the company remove all protective restrictions on its Claude AI model by 5:01 p.m. on Friday. The Defense Department sought to alter a previous agreement, pushing to allow the technology for “all lawful applications” without the company’s established safeguards.
Anthropic strongly objected to these modifications, arguing that unrestricted access could enable the use of artificial intelligence for widespread domestic mass surveillance or for fully autonomous control of lethal weapons.
Anthropic CEO Dario Amodei firmly rejected the government’s demands, stating that his organization “cannot in good conscience comply” with the Pentagon’s final request. In a Thursday statement, Amodei clarified the company’s position, noting that it had never sought to restrict the technology in an ad hoc manner or to object to specific military operations.
“However, in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values,” Amodei said. He added that certain military applications remain beyond the capabilities of current technology to execute safely and reliably.
Escalating Threats and Personal Attacks
The standoff involved intense negotiations and personal criticisms. U.S. Under Secretary of Defense Emil Michael reportedly accused Amodei of having a “God-complex,” alleging that the tech executive was attempting to gain personal control over the U.S. military while jeopardizing national security.
Meanwhile, Hegseth criticized Anthropic’s leadership as “sanctimonious” and “arrogant.” Following Trump’s announcement, Hegseth posted on the social media site X, stating that the company had “delivered a master class in arrogance and betrayal as well as a textbook case of how not to do business with the United States Government or the Pentagon.”
During the negotiations, the Pentagon issued multiple threats to ensure compliance. Hegseth warned that the government might designate Anthropic as a “supply-chain risk,” a label traditionally reserved for businesses tied to foreign adversaries. This designation would have barred other defense contractors from using the company’s technology for military work. The Defense Department also threatened to invoke the Defense Production Act, a Cold War-era statute that grants the president authority to compel domestic industries to disclose technology in the name of national defense. Trump’s outright ban ultimately stopped short of utilizing this specific legal authority.
Background on the Government Partnership
Founded in 2021 by seven former OpenAI employees, Anthropic was the first major AI laboratory to collaborate with the U.S. military on classified systems. The company secured a $200 million contract with the Defense Department in July to prototype advanced AI capabilities for national security.
Under this agreement, the company developed customized models known as Claude Gov. These models operate with fewer restrictions than the standard commercial versions and are accessed through platforms provided by Amazon’s cloud services and Palantir. The military primarily uses the technology for routine tasks like document summarization and report generation, though it is also utilized for intelligence assessments and military strategy.
Tensions over the partnership reportedly began to surface earlier this year, when officials debated the technology’s use following a January military raid aimed at apprehending Venezuelan leader Nicolás Maduro, an operation that raised reservations within Anthropic. The dispute intensified further over conflicting accounts of a hypothetical exchange about a nuclear attack on the United States, an episode that underscored the grave implications of deploying lethal autonomous weapons.
Anthropic’s resistance has garnered significant support across the broader technology sector. Unions and worker groups representing approximately 700,000 employees at Amazon, Google, and Microsoft released a joint statement endorsing Anthropic’s stance. The groups urged their respective employers to reject similar military demands, warning that capitulating to the Pentagon’s intimidation would further implicate tech workers in violence and repression. The startup, currently valued at roughly $380 billion, remains legally bound to balance profit with its public benefit mission of developing artificial intelligence responsibly.
