OpenAI has reached a new agreement with the United States Department of War to deploy its advanced artificial intelligence models within classified military networks. The OpenAI Pentagon deal introduces a multi-layered approach to safety, establishing firm boundaries on how the government can use the technology. The partnership arrives during a turbulent period for the artificial intelligence industry, coming just hours after rival Anthropic saw its own negotiations with the government collapse.
Following the breakdown of Anthropic’s talks on Friday, President Donald Trump ordered federal agencies to phase out Anthropic’s technology over a six-month transition period. Simultaneously, Secretary of Defense Pete Hegseth designated Anthropic as a supply-chain risk. Shortly after these events, OpenAI CEO Sam Altman announced his company’s successful agreement on the social media platform X, noting that the Defense Department displayed a deep respect for safety and a desire to partner for the best possible outcome.
Even as he announced the contract, Altman acknowledged the rapid pace of the agreement, admitting the decision was definitely rushed and that the optics do not look good. The announcement triggered immediate public backlash, contributing to Anthropic’s Claude application overtaking OpenAI’s ChatGPT in Apple’s App Store by Saturday.
Establishing Ethical Guardrails and Redlines
To address concerns regarding military artificial intelligence, OpenAI outlined three primary redlines that restrict the Department of War’s use of its models. First, the technology cannot be used for mass domestic surveillance. Second, the military is prohibited from using the models to direct autonomous weapons systems. Finally, the agreement forbids the use of OpenAI technology for high-stakes automated decisions, such as social credit systems.
Unlike other companies that have reduced safety guardrails in favor of usage policies for national security deployments, OpenAI claims its approach offers stronger protection against unacceptable use. The company asserts that its contract provides better guarantees and more responsible safeguards than any previous agreement for classified deployments.
OpenAI maintains full discretion over its safety stack and will not deploy its models without these guardrails in place. If the government violates the terms of the agreement, OpenAI retains the right to terminate the contract.
Cloud-Only Deployment Architecture
A central component of the OpenAI Pentagon deal is its strict reliance on cloud infrastructure. Katrina Mulligan, OpenAI’s head of national security partnerships, emphasized that deployment architecture matters more than contract language.
Because access is restricted to a cloud application programming interface, the models cannot be installed on edge devices. This architectural decision physically prevents them from being integrated directly into weapons systems, operational hardware, or sensors that could power fully autonomous lethal weapons.
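To make the structural point concrete, the minimal sketch below shows what a cloud-only integration pattern looks like in practice; the endpoint URL, authentication scheme, and response fields are hypothetical placeholders, since the actual classified interface is not public. The key property is that the client holds no model weights, so the capability cannot run on a disconnected edge device.

import requests

# Hypothetical endpoint for a hosted, access-controlled inference service.
INFERENCE_ENDPOINT = "https://cloud-enclave.example/v1/chat"

def query_model(prompt: str, token: str) -> str:
    """Send a prompt to the hosted model and return its reply text."""
    response = requests.post(
        INFERENCE_ENDPOINT,
        headers={"Authorization": f"Bearer {token}"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=30,
    )
    response.raise_for_status()
    # Hypothetical response schema; the real deployment's schema is not public.
    return response.json()["reply"]

In this pattern, every inference request traverses the provider's hosted infrastructure, which is what allows server-side guardrails and monitoring to remain under OpenAI's control rather than being stripped out on a device in the field.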
Furthermore, the agreement mandates that cleared OpenAI personnel remain involved. Forward-deployed engineers, alongside safety and alignment researchers, will assist the government while independently verifying that the established redlines are not crossed.
Contractual Protections and Legal Debates
The contract language strictly binds the military’s use of the system to existing laws and oversight protocols. It requires human control over weapons systems, referencing Department of Defense Directive 3000.09, which mandates rigorous testing before deploying autonomous systems.
For intelligence activities, the contract stipulates compliance with the Fourth Amendment, the National Security Act of 1947, the Foreign Intelligence Surveillance Act of 1978, and the Posse Comitatus Act. It explicitly prohibits the unconstrained monitoring of private information belonging to United States persons.
However, the inclusion of Executive Order 12333 in the contract has sparked criticism. Mike Masnick of Techdirt argued that the agreement absolutely does allow for domestic surveillance, noting that the National Security Agency uses Executive Order 12333 to capture communications by tapping lines outside the United States, collection that can still sweep up information on American citizens. OpenAI disputes this interpretation, maintaining that the contract explicitly excludes mass domestic surveillance from lawful use.
Industry De-escalation Efforts
Altman explained that a primary motivation behind the accelerated agreement was a desire to de-escalate rising tensions between the Department of War and artificial intelligence laboratories. He stated that a good future requires deep collaboration between the government and the technology sector.
As part of the negotiations, OpenAI requested that the Pentagon make the same contractual terms available to all artificial intelligence companies. Altman urged the government to resolve its ongoing dispute with Anthropic, warning that the current situation is a very bad way to begin the next phase of collaboration, and made clear OpenAI's position that Anthropic should not be designated a supply-chain risk.
While Altman hopes the deal will reduce industry friction, he acknowledged the risk of the strategy. He noted that if the agreement successfully de-escalates tensions, OpenAI will appear as a company that absorbed significant pain to help the industry. If the effort fails, he admitted the company will continue to be characterized as rushed and uncareful.
