When Anthropic accidentally exposed roughly 3,000 internal documents through a content management system misconfiguration in late March, the world got its first look at Claude Mythos 5 — and the implications have reverberated across the AI industry ever since. The model, internally codenamed “Capybara,” is the first AI system to cross the 10-trillion-parameter threshold, placing it in a category of its own.

A New Tier of Scale

The raw numbers are staggering. Claude Mythos 5 is built on a refined Mixture of Experts (MoE) architecture: the total parameter count sits at 10 trillion, but only an estimated 800 billion to 1.2 trillion parameters are active on any given forward pass. In practical terms, the model stores knowledge across all 10 trillion parameters while its per-token compute cost stays closer to that of a 1-trillion-parameter dense model.
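To make the active-versus-total arithmetic concrete, here is a minimal sketch of top-k expert routing, the mechanism MoE models use to activate only a fraction of their parameters per token. Every name and size below is a toy illustration chosen for readability, not a detail drawn from the leaked documents.

```python
import numpy as np

# Minimal sketch of top-k Mixture of Experts (MoE) routing.
# All sizes are toy, illustrative numbers -- not figures from the
# leaked documents.

rng = np.random.default_rng(0)

d_model = 64        # token embedding width
num_experts = 16    # total experts (drives the total parameter count)
top_k = 2           # experts activated per token (drives the active count)

# Each expert is a single feed-forward matrix: d_model -> d_model.
expert_weights = rng.standard_normal((num_experts, d_model, d_model)) * 0.02
router_weights = rng.standard_normal((d_model, num_experts)) * 0.02

def moe_forward(x):
    """Route one token through its top_k experts and mix their outputs."""
    logits = x @ router_weights                       # score every expert
    top = np.argsort(logits)[-top_k:]                 # indices of the k best
    gates = np.exp(logits[top] - logits[top].max())   # softmax over winners
    gates /= gates.sum()
    # Only top_k of num_experts weight matrices are multiplied per token,
    # which is why active parameters are far fewer than total parameters.
    return sum(g * (x @ expert_weights[e]) for g, e in zip(gates, top))

token = rng.standard_normal(d_model)
output = moe_forward(token)

total_params = expert_weights.size                # all experts
active_params = top_k * d_model * d_model         # experts actually used
print(f"total expert params: {total_params:,}")
print(f"active per token:    {active_params:,}")
```

With 16 experts and top-2 routing, each token touches roughly one-eighth of the expert parameters, the same order of sparsity implied by the reported 10-trillion total and roughly 1-trillion active figures.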

For context, OpenAI’s GPT-5 and Google’s Gemini Ultra, the previous generation of frontier models, are believed to operate in the 1–2 trillion active-parameter range. Mythos 5 represents a leap that independent researchers are describing as the most significant scaling milestone since GPT-4.

Anthropic has confirmed the model’s existence but has not released official benchmarks, a system card, or made it publicly available. The company cited the need for “efficiency improvements and responsible rollout” — language that takes on particular weight given what the leaked documents revealed about the model’s capabilities.

Cybersecurity: A Double-Edged Breakthrough

The most striking, and most alarming, aspect of Mythos 5 is its performance on cybersecurity tasks. According to Anthropic’s own draft documentation, the model can identify and exploit zero-day vulnerabilities across every major operating system and web browser when directed to do so by a user. The vulnerabilities it surfaces are often subtle; the oldest confirmed discovery so far is a now-patched 27-year-old bug in OpenBSD.

This has prompted a controlled early-access program focused specifically on defensive cybersecurity. Select enterprise customers — primarily in critical infrastructure and financial services — are testing the model under strict conditions. Anthropic’s red team has been running structured evaluations to determine the boundaries of what the system can and cannot be permitted to do at general release.

The situation puts Anthropic in an uncomfortable but increasingly familiar position: the company that built the industry’s most articulate case for responsible AI development now holds the most powerful, and potentially most dangerous, model in existence.

What This Means for the Market

The competitive pressure is immediate. OpenAI, Google DeepMind, and Meta’s FAIR lab are all understood to have large-scale MoE projects in advanced development, but none has publicly crossed the 10-trillion-parameter mark. If Mythos 5’s capabilities hold up under independent evaluation, it will reset the bar across reasoning, coding, scientific research, and now offensive security tasks.

For enterprises, the calculus is straightforward: whoever gets early access to Mythos 5 for defensive security applications gains a meaningful advantage in threat detection. For regulators, the story is more complicated. The EU AI Act’s framework for high-risk AI systems was not designed with models capable of finding 27-year-old zero-days in mind — and that gap is about to become very visible.

Anthropic says a broader release timeline will be announced once safety evaluations are complete. Given what’s already leaked, the industry will be watching closely.

Lois Lane

Contributing writer at Clarqo, covering technology, AI, and the digital economy.