OpenAI is navigating intense public and internal scrutiny after finalizing the OpenAI Pentagon deal, a landmark agreement allowing the United States Department of Defense to deploy the company’s artificial intelligence models on a classified government network. The swift decision has sparked a massive user exodus, with over 1.5 million subscribers canceling their ChatGPT accounts in less than 48 hours. Many of these departing users are now flocking to rival AI firm Anthropic, propelling its Claude chatbot to the top of the App Store.
The controversial OpenAI Pentagon deal materialized shortly after Anthropic formally refused a similar military partnership. Anthropic's executives declined to grant the government unrestricted access to the company's systems, citing strict boundaries against using its technology for domestic mass surveillance or for operating autonomous weaponry without human oversight.
Government Retaliation and Industry Pushback
Anthropic’s refusal triggered immediate retaliation from the federal government. President Donald Trump publicly criticized the company on Truth Social, branding Anthropic “leftwing nut jobs” and accusing the company of making a disastrous mistake by trying to strong-arm the military. Trump subsequently ordered federal agencies to phase out Anthropic’s technology within six months. Following the president’s remarks, Defense Secretary Pete Hegseth designated Anthropic a national security “supply chain risk” and announced an immediate ban on military contractors conducting commercial activity with the startup.
The government’s aggressive stance has unified a broad coalition of technology professionals. Hundreds of tech industry workers—including founders, engineers, and executives from companies such as OpenAI, Slack, IBM, Databricks, and Salesforce Ventures—signed an open letter urging the Department of Defense to withdraw the supply chain risk label. The coalition also called on Congress to investigate whether wielding such extraordinary authorities against an American technology firm is appropriate. Anthropic, for its part, has called the designation legally unsound and vowed to contest it in court.
Protests and Internal Friction at OpenAI
The rapid finalization of the defense contract has created friction both outside and inside OpenAI’s San Francisco headquarters. Chalk-wielding activists gathered outside the office, writing messages on the sidewalks such as “Orwell warned us” and asking employees if they would spy on their neighbors. While one source suggested the protests might have been financed by a competitor using fake activists, the internal dissent is very real. Over 100 current OpenAI staff members signed an open letter imploring company leadership to reject the military’s demands, and prominent research scientist Aidan McLaughlin publicly stated that the deal was not worth the cost.
During a tense all-hands meeting, CEO Sam Altman attempted to contain the backlash. He conceded that the Friday night announcement was opportunistic, poorly executed, and rushed, but defended the partnership as a wise decision and reassured staff that the contract includes explicit prohibitions on domestic mass surveillance and preserves human responsibility for the use of force. Sources indicate that OpenAI and the Pentagon have already incorporated additional language into the contract to strengthen these safeguards. Despite a vocal minority of critics, internal communications suggest that broader sentiment among OpenAI employees remains pragmatic and generally favorable.
Military Needs and the Governance Vacuum
Defense officials argue that strict limitations on artificial intelligence models fundamentally jeopardize national security. Emil Michael, the under secretary of defense for research and engineering, recently spoke at the American Dynamism Summit about the dangers of restrictive AI terms of service. He stated that commercial AI agreements established under the previous administration contained extensive operational limitations that could hinder real-time military missions.
Michael recounted discovering contract stipulations that would bar the military from using the models to plan operations that could lead to kinetic, explosive outcomes. He noted that these restrictions specifically affected commands overseeing air operations in regions such as China, Iran, and South America. According to Michael, if an operator breached these strict terms, the AI model could theoretically shut down in the middle of a critical combat mission.
The rapid pivot of companies like OpenAI into the defense sector exposes a significant governance gap. Unlike traditional defense contractors such as Lockheed Martin, which operate under decades of established security protocols and oversight mechanisms, AI companies are navigating uncharted territory. Existing military frameworks were designed for traditional software and weapons systems, not for complex large language models that continuously learn from new data. As the industry shifts and the military accelerates its artificial intelligence integration, policymakers and technology leaders are effectively writing the rules as they go.
