Cybersecurity

Anthropic unveils Project Glasswing — Claude Mythos already found "thousands" of zero-days in major software

Anthropic launched Project Glasswing on April 7 alongside AWS, Apple, Cisco, Google and Microsoft: a closed program distributing a restricted preview of Claude Mythos — a frontier model Anthropic says has already identified thousands of high-severity zero-day vulnerabilities across every major OS and browser. Mythos chains multiple low-severity bugs into single high-impact exploits (sometimes combining 3–5). Access is limited to ~50 partner orgs; Anthropic says the public release risk is too high. Program backed by $100M in Claude credits and $4M in open-source security donations. Sets the template for "AI that is too dangerous to ship".

Anthropic · Claude Mythos · Zero-Day · Project Glasswing · AI Safety

Why it matters

If Mythos really is finding zero-days at the claimed scale, the offense-defense balance in software security shifts materially within months. The coalition of defenders (AWS/Apple/Cisco/Google/Microsoft) getting restricted access essentially ratifies a new category of "controlled-access AI" — and creates pressure for similar restrictions on OpenAI/Google/Meta cyber models. Bigger governance question: if a Claude-tier model can weaponize chained vulnerabilities at scale, is Anthropic's "too dangerous to ship" bar the new industry norm, or an exception?

Impact scorecard

Overall: 8.5/10
Stakes: 9.5
Novelty: 9.0
Authority: 9.5
Coverage: 9.0
Concreteness: 8.5
Social: 8.5
FUD risk: 3.0
Coverage: 35 outlets · 9 tier-1
Anthropic blog, TechCrunch, Fortune, VentureBeat, CyberScoop, NPR, …
X / Twitter: 18,000 mentions
@AnthropicAI · 12,000 likes
@simonw · 8,400 likes
Reddit: 4,800 upvotes
r/netsec, r/cybersecurity, r/MachineLearning

Trust check

medium

First-party Anthropic announcement with partner confirmations from named Fortune-10 companies, plus independent coverage from NPR, TechCrunch, VentureBeat, and Fortune. The "thousands of zero-days" claim is self-reported and unverifiable without access to the model; treat it as Anthropic's characterization, not a third-party finding. FUD risk is moderate: the vendor has a strong incentive to hype both the capability and its consequences.

Primary source ↗