Anthropic announced Claude Mythos Preview on April 7 — a frontier model so capable at finding software vulnerabilities that the company decided not to release it publicly. Instead, Anthropic created Project Glasswing, a gated research partnership that gives select organizations access to the model for defensive security work only.
This is not a standard product launch. It is a controlled deployment of a model that has already discovered thousands of zero-day vulnerabilities across critical infrastructure, including every major operating system and every major web browser.
What Mythos Preview Can Do
Claude Mythos Preview is a general-purpose frontier model — Anthropic’s most capable yet for coding and agentic tasks. Its cybersecurity strength is a direct consequence of that broader capability: a model that can deeply understand and modify complex software is also one that can find and fix its vulnerabilities.
During internal testing, Mythos Preview identified thousands of zero-day vulnerabilities, many classified as critical. The specifics are striking:
- A 27-year-old denial-of-service vulnerability in OpenBSD’s TCP SACK implementation — present since 1999, missed by every prior audit
- A 16-year-old vulnerability in FFmpeg’s H.264 codec that had been overlooked by every fuzzer and human reviewer
- A 17-year-old remote code execution flaw in FreeBSD’s NFS server (CVE-2026-4747) that granted unauthenticated root access
These are not theoretical findings. They are exploitable bugs in production software used by millions, some of which had been hiding in plain sight for decades.
The model’s vulnerability discovery is not a specialized feature — it emerges from the same capabilities that make it effective at general coding and reasoning. A model that can trace execution paths through millions of lines of code, understand memory management at a systems level, and reason about edge cases is naturally suited to finding the kinds of bugs that escape conventional tools and human review.
Why Anthropic Won’t Release It
Anthropic’s reasoning is straightforward: the same capabilities that make Mythos Preview exceptional at finding vulnerabilities could be used to exploit them. A publicly available model with this level of security capability would be a tool for attackers as much as defenders.
This is a concrete instance of the dual-use problem that the AI safety community has debated for years. Anthropic’s response is not to withhold the model entirely, but to control who uses it and how.
The company has stated it does not plan to make Mythos Preview generally available. The eventual goal is to deploy Mythos-class models at scale once new safeguards are in place, but there is no timeline for when that might happen.
Project Glasswing
Project Glasswing is the framework Anthropic built to deploy Mythos Preview responsibly. It is a gated research partnership that provides access to the model for defensive cybersecurity work.
Launch partners include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Anthropic has extended access to over 40 additional organizations that build or maintain critical software infrastructure.
The program has specific constraints:
| Parameter | Detail |
|---|---|
| Access | Gated research preview — not publicly available |
| Pricing | $125 per million input/output tokens |
| Credits | Anthropic committed $100M in usage credits |
| Platforms | Claude API, Amazon Bedrock, Google Cloud Vertex AI, Microsoft Foundry |
| Use restriction | Defensive security work only |
| Sharing requirement | Participants share findings with the broader industry |
The sharing requirement is important. Glasswing is not just a sales channel for an expensive model — it is designed to produce public benefit. Vulnerabilities discovered through the program are disclosed responsibly and fixed, improving security for everyone who uses the affected software.
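As a rough sketch of how a Glasswing participant might reach the model through the Claude API (one of the platforms in the table above): the request shape below follows the standard Anthropic Messages API, but the model identifier `claude-mythos-preview` and the audit prompt are assumptions for illustration, not documented values.

```python
# Hypothetical sketch of a defensive code-audit request. The model ID
# "claude-mythos-preview" is an assumption; a real identifier would come
# from the program's onboarding materials.
def build_audit_request(source_snippet: str,
                        model: str = "claude-mythos-preview") -> dict:
    """Assemble a Messages API request body for a defensive audit prompt."""
    return {
        "model": model,
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": (
                    "Review the following code for memory-safety and logic "
                    "vulnerabilities. Report findings only; do not produce "
                    "exploit code.\n\n" + source_snippet
                ),
            }
        ],
    }

# With credentials configured, this body would be passed to something like
# anthropic.Anthropic().messages.create(**build_audit_request(code)).
```

Note the prompt's framing: under the program's use restriction, requests ask for findings and fixes, not exploitation.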
The Broader Implications
Mythos Preview represents a qualitative shift in what AI models can do for cybersecurity. Traditional vulnerability scanning tools operate on known patterns — they find bugs that look like bugs someone has seen before. Mythos Preview reasons about code at a level that allows it to find novel vulnerability classes that no scanner would catch.
The 27-year-old OpenBSD bug is the clearest example. That code had been reviewed by some of the most security-conscious developers in the open-source community. It passed every existing analysis tool. The fact that an AI model found it — along with thousands of similar bugs — suggests that the ceiling for automated vulnerability discovery is far higher than the current generation of security tools has reached.
For enterprises evaluating AI’s impact on their security posture, the takeaway is significant. Models at this capability level will eventually become broadly available, whether from Anthropic or competitors. Organizations that invest in AI-augmented security now — including participating in programs like Glasswing — will be better positioned when these capabilities become standard rather than exceptional.
For software developers, the message is different but equally important. Code that has passed human review and automated testing for decades is not necessarily secure. The standard for what constitutes a thorough security audit is about to change fundamentally.
Anthropic’s expanding compute infrastructure and growing enterprise customer base provide the foundation for deploying capabilities like Mythos Preview at scale. Project Glasswing is the first structured attempt to do so in a domain where the stakes — and the risks — are unusually high.
