Anthropic announces its most powerful AI and decides not to release it. At least, not to everyone.
Anthropic has done something rare in the history of technology: it announced a product and, almost simultaneously, decided not to release it to the public.

The new artificial intelligence model, Claude Mythos, has been described by the company as the most capable system it has ever developed. Precisely for this reason, however, it remains out of reach of the general public, accessible only to a small circle of selected partners.
The reason for the restriction is as fascinating as it is unsettling.
In internal tests, Mythos reportedly demonstrated cybersecurity analysis capabilities beyond current standards, identifying and chaining together thousands of critical vulnerabilities across widely used operating systems and browsers. According to Anthropic, if used offensively, these capabilities could be potentially catastrophic. Hence the decision: no global launch, but extremely controlled access.
Through the Glasswing project, the company granted access to the model to a very limited number of organizations, including CrowdStrike, Microsoft, Apple, and Google, with the sole objective of strengthening defensive cybersecurity.
The idea is simple: use Mythos to discover and fix vulnerabilities before malicious actors can exploit them.
The decision immediately sparked debate. The Economist devoted a cover story to the topic, while The Guardian highlighted how these concerns arise at a time of significant regulatory uncertainty, with the administration of Donald Trump viewed as less inclined toward AI regulation compared to other international contexts.
Meanwhile, Anthropic’s growth continues. According to recent estimates, the company’s annual revenue surpassed $30 billion in 2026, tripling from the previous year.
But beyond the numbers, Mythos represents more than just a model.
It signals a turning point: artificial intelligence has reached a stage where the question is no longer simply “what can it do?” but “what should we allow it to do?”