Artificial intelligence startup Anthropic has officially unveiled a preview of its most advanced frontier system, the Anthropic Mythos model. The debut comes as part of a collaborative cybersecurity initiative known as Project Glasswing, through which a select group of major technology firms will use the unreleased model to strengthen defensive cybersecurity measures and secure critical software infrastructure.
The Anthropic Mythos model represents a significant leap in the company's artificial intelligence capabilities. Described as Anthropic's most powerful system to date, Mythos is designed with strong agentic coding and academic reasoning skills, and it is larger and considerably more intelligent than the company's previously leading Opus models. While Mythos is a general-purpose AI and was not specifically trained for cybersecurity work, its sophisticated performance makes it highly effective at identifying software flaws.
The Launch of Project Glasswing
To harness the capabilities of the Anthropic Mythos model responsibly, the company launched Project Glasswing, an initiative that brings together 12 major partner organizations to deploy the AI for defensive security work. Initial partners include industry leaders such as Amazon, Apple, Broadcom, Cisco, CrowdStrike, the Linux Foundation, Microsoft, and Palo Alto Networks.
Under Project Glasswing, these partners will use the Mythos model to scan both their own first-party systems and widely used open-source software for dangerous code vulnerabilities. The overarching goal is for the partners to eventually share the insights and defensive techniques they develop with the broader technology industry, creating a collaborative shield against cyber threats.
Thousands of Zero-Day Vulnerabilities Found
The defensive application of the Anthropic Mythos model is already yielding substantial results. According to Anthropic, the model has been actively scanning systems over the past few weeks and has identified thousands of zero-day vulnerabilities, many of them classified as critical.
Remarkably, Anthropic noted that several of the vulnerabilities Mythos detected are deeply embedded legacy issues, some of which had gone undiscovered for one to two decades. By uncovering these long-hidden flaws, the model is allowing organizations to patch critical vulnerabilities before malicious actors can exploit them.
Limited Release and Security Risks
Despite its proven utility, the Anthropic Mythos model will not see a general public release. Access to the preview is strictly limited. While there are 12 primary partner organizations spearheading Project Glasswing, Anthropic confirmed that a total of 40 organizations will ultimately gain access to the Mythos preview for security testing.
The decision to restrict access stems directly from the model's immense power. Documents indicate that the Anthropic Mythos model far exceeds the performance of currently available public models in areas like software coding and cybersecurity. Because the AI is exceptionally skilled at finding software bugs, there is serious concern that a fully public release could be weaponized: instead of fixing code, bad actors could use the model to rapidly discover and exploit vulnerabilities, creating a severe cybersecurity threat.
A History of Leaks and Recent Blunders
The official confirmation of the Anthropic Mythos model follows a series of high-profile data security incidents at the AI startup. News of the powerful new tier first leaked last month in an incident reported by Fortune, when security researchers discovered a draft blog post sitting in an unsecured cache of documents on a publicly accessible data lake.
The leaked internal document, which Anthropic later attributed to human error, referred to the unreleased model by its previous codename, “Capybara.” The memo explicitly stated that Capybara was a new tier of model that was by far the most powerful AI the company had ever developed.
In a separate but related technical blunder last month, Anthropic accidentally exposed nearly 2,000 source code files containing over half a million lines of code. The exposure stemmed from a mistake made during the rollout of version 2.1.88 of the Claude Code software package, and the company's rushed cleanup efforts inadvertently knocked thousands of code repositories on GitHub offline.
Ongoing Government Discussions and Legal Battles
As the Anthropic Mythos model rolls out to enterprise partners, its deployment intersects with complex government relations. Anthropic is engaged in ongoing discussions with federal officials regarding the use and deployment of Mythos.
However, these conversations are taking place against the backdrop of a significant legal dispute. Anthropic is currently locked in a legal battle with the Trump administration. The conflict ignited after the Pentagon officially labeled the AI startup a supply-chain risk. The government’s designation was issued in response to Anthropic’s strict refusal to allow its artificial intelligence systems to be used for autonomous targeting or the surveillance of United States citizens.
