OpenAI has officially secured an agreement with the United States Department of Defense, now referred to by the Trump administration as the Department of War, to deploy its artificial intelligence models inside classified military networks. OpenAI chief executive officer Sam Altman announced the finalization of the contract late Friday evening on the social media platform X. This major development arrived just hours after President Donald Trump issued a directive banning federal agencies from using artificial intelligence technology created by rival company Anthropic.
The new OpenAI Pentagon deal introduces specific technical safeguards that address the exact safety limitations Anthropic had previously demanded. Altman confirmed that the military agreed to clear boundaries regarding how the artificial intelligence can be utilized. This swift partnership highlights a significant shift in how the federal government approaches technology contracts and navigates relationships with major software developers.
Safety Principles and Technical Safeguards
According to Altman, the newly signed agreement includes strict limitations on the military’s application of OpenAI’s technology. The contract explicitly prohibits the use of the artificial intelligence models for domestic mass surveillance. Additionally, it mandates that human beings must remain responsible for any use of force, a rule that strictly forbids the artificial intelligence from operating fully autonomous weapon systems.
Altman stated that the Department of War fully concurs with these foundational safety principles, noting that the military has incorporated these rules directly into the agreement and existing policies. To enforce these boundaries, OpenAI plans to build specialized technical safeguards to ensure its models behave exactly as intended. Furthermore, the company will deploy its own engineers to work alongside military personnel, providing direct assistance and monitoring the safety of the systems within the classified infrastructure.
The Anthropic Ban and Government Ultimatum
The OpenAI Pentagon deal materialized in the immediate aftermath of a highly publicized standoff between the federal government and Anthropic. Earlier on Friday, President Trump ordered his administration to immediately halt all use of Anthropic’s technology across federal agencies. Defense Secretary Pete Hegseth outlined a six-month transition period for the government to phase out the company’s services and switch to alternative providers.
The core of the dispute revolved around usage terms. Anthropic had engaged in extensive discussions with the military, advocating for the same limitations on domestic mass surveillance and lethal autonomous weapons that OpenAI eventually secured. During those negotiations, however, the military insisted on language that would allow the technology to be used for “all lawful purposes.”
Tensions reached a breaking point when the Pentagon set a strict Friday afternoon deadline for Anthropic to accept its final offer. After reviewing the government’s terms, Anthropic leadership announced on Thursday that the company could not in good conscience agree to the military’s conditions.
Supply Chain Risk Designation
Following the missed deadline, Defense Secretary Hegseth escalated the situation by officially designating Anthropic as a supply chain risk. This severe classification is typically reserved for foreign adversaries, making its application to a domestic American company highly unusual.
Hegseth argued that Anthropic’s stance was fundamentally at odds with American principles. He declared that the new designation prohibits any United States military contractors, partners, or suppliers from engaging in commercial activity with the artificial intelligence firm. In a public statement, Hegseth proclaimed that the nation’s war efforts will never be held hostage by the ideological whims of big tech, adding that his decision is final.
Legal Pushback and Industry De-Escalation
Anthropic responded aggressively on Friday night, releasing a detailed statement condemning the government’s actions. The company described the supply chain risk designation as an unprecedented move against an American business. Anthropic representatives argued that the decision is legally questionable and establishes a perilous precedent for any domestic firm attempting to negotiate government contracts.
The company revealed it received no direct communication from the White House or the Pentagon regarding the designation and vowed to fight the action in court. Anthropic maintains that Hegseth lacks the legal authority to block military contractors from working with the company on projects for other, non-military clients. The company emphasized that its requested safety exceptions have never hindered any actual government mission.
Meanwhile, Altman has publicly called for a reduction in tensions across the industry. He urged the Department of War to offer the same usage terms granted to OpenAI to all other artificial intelligence developers, stating his belief that everyone would be willing to accept them. Altman expressed a strong desire to see the situation de-escalate, steering away from harsh governmental and legal actions in favor of reasonable, mutually agreeable partnerships.
