The rivalry between artificial intelligence giants OpenAI and Anthropic has erupted into an aggressive battle for enterprise dominance and government contracts. Following a U.S. Department of Defense decision to blacklist Anthropic as a national security risk, OpenAI is swiftly moving to capture its competitor’s core business. This corporate warfare represents the climax of a bitter, decade-long AI industry feud between OpenAI CEO Sam Altman and Anthropic CEO Dario Amodei.
OpenAI is actively targeting Anthropic’s stronghold in the enterprise market, specifically challenging successful products like Claude Code and Cowork. To fund the push, OpenAI is pitching a $10 billion joint venture to private equity firms, seeking around $4 billion in combined commitments.
To sweeten the deal, OpenAI is offering a guaranteed minimum return of 17.5 percent—a financial promise Anthropic does not match. Major firms including TPG, Advent International, Bain Capital, and Brookfield Asset Management are in discussions. However, some firms, such as Thoma Bravo, have walked away over concerns about the venture’s long-term profitability.
The underlying strategy is to absorb the massive upfront costs of deploying engineers to customize AI models for corporate clients. Once a business integrates a customized model into its systems, the switching costs become significant, effectively locking in customers.
The Pentagon Fallout
The business clash follows a major rift over government work. After the Pentagon blacklisted Anthropic, Amodei’s company filed a lawsuit against the Trump administration. Within hours of the blacklisting, Altman secured a deal for OpenAI to perform classified military work.
In response to the Pentagon deal, Amodei took to a company Slack channel to call OpenAI “mendacious.” He stated the situation reflected a familiar pattern of behavior he had frequently seen from Altman.
Deep-Rooted Personal Hostility
The tensions between the two leaders are highly personal and stem from their time working together. The AI industry feud began around 2016 when Amodei, his sister Daniela, and OpenAI co-founder Greg Brockman debated the future of artificial intelligence in a San Francisco shared house. Disagreements over transparency, safety, and power eventually fractured their relationship.
Amodei was horrified by early staff cuts ordered by Elon Musk and strongly opposed Brockman’s suggestion to sell artificial general intelligence to United Nations Security Council nuclear powers, which Amodei viewed as nearly treasonous. Conflicts peaked when Amodei blocked Brockman from working on the GPT language models, and Altman accused the Amodei siblings of plotting against him—a claim explicitly denied by other executives.
By late 2020, Dario and Daniela Amodei, along with nearly a dozen other employees, left OpenAI to found Anthropic. Before departing, Amodei demanded to report directly to the board and refused to work with Brockman.
Public Attacks and Ideological Divides
Today, the animosity is visible in both public and private spheres. Amodei has internally compared Altman’s legal fight with Elon Musk to “Hitler versus Stalin” and labeled Brockman’s $25 million donation to a pro-Trump super political action committee as “evil.” He has also likened OpenAI to tobacco companies knowingly hawking a harmful product.
This ideological divide played out publicly when Anthropic ran a Super Bowl ad slyly mocking OpenAI’s decision to put advertisements in its chatbot. The frosty relationship was also evident at a February 2026 AI summit in New Delhi. While the Indian Prime Minister and assembled tech leaders raised their hands together for a closing photo, Amodei and Altman awkwardly bumped elbows instead.
Emerging Cyber Threats
As the companies battle for market share and race toward initial public offerings, the technology itself is presenting severe new dangers. Anthropic recently issued a private warning to government officials about its forthcoming model, “Mythos.” An unreleased blog post obtained by Fortune describes Mythos as far exceeding the capabilities of cyber defenders, warning that it will significantly increase the likelihood of massive cyberattacks by 2026.
This threat is already materializing. Anthropic reported that a state-sponsored Chinese group recently used AI agents to autonomously hack roughly 30 global targets, with artificial intelligence managing up to 90 percent of the tactical operations without human intervention.
The rise of these autonomous tools has alarmed the cybersecurity community. Employees frequently build their own agents at home and inadvertently connect them to internal work systems—a practice known as “shadow AI” that opens new vulnerabilities. A recent Dark Reading poll found that 48 percent of cybersecurity experts now view agentic AI as the primary attack vector for 2026, surpassing concerns over deepfakes.
