Claude Mythos Preview and the new rules of cybersecurity
AI is reshaping cybersecurity. Anthropic's Claude Mythos Preview exposes a new era of rapid, autonomous vulnerability discovery and the governance challenges ahead.
On April 7, 2026, Anthropic unveiled Claude Mythos Preview, a powerful AI model that can autonomously identify software vulnerabilities across complex systems. What was expected to be a major industry milestone instead came with an unusual caveat: Anthropic would not release the model for public use.
Mythos can surface thousands of high-severity, zero-day vulnerabilities and generate complex, multi-step exploits. It can move through codebases, chain individual weaknesses together and build full attack sequences with far less human involvement than traditional analysis.
The decision to restrict public use reflects a deliberate tradeoff between capability and control. Instead of a public release, Anthropic is rolling Mythos out through Project Glasswing, a tightly controlled consortium that includes major tech firms, financial institutions and government stakeholders. The initiative aims to manage potential risks by using a restricted version of Mythos to identify and patch critical vulnerabilities before attackers can exploit them.
Mythos signals a broader shift in cybersecurity, as AI systems capable of both discovering and acting on vulnerabilities at scale are reshaping security workflows, blurring the line between defensive tools and offensive capability.
For security leaders, the challenge now is to keep pace with machines that can find and exploit weaknesses far faster than humans can -- a shift experts say is more about speed than about entirely new categories of risk.
"Mythos doesn't introduce an entirely new category of risk -- it demonstrates a step change in speed and autonomy," said Alan Osborne, CISO at Paysafe, a global online payments company.
A new kind of vulnerability engine
In traditional security workflows, teams uncover vulnerabilities through manual research or bug bounty programs. AI systems such as Mythos are changing this dynamic by continuously scanning software and surfacing more issues than teams can realistically manage.
Cybersecurity has also long relied on a balance between the time it takes attackers to find weaknesses and the time it takes defenders to patch them. That balance is now tipping against defenders, as large-scale scanning risks overwhelming organizations with more critical flaws than they can fix, especially in complex, interconnected systems.
However, Osborne argued that this pressure isn't entirely new. "In many environments, that point was already reached; AI simply makes it more visible," he said, noting that most organizations already identify more vulnerabilities than they can remediate.
That raises a key question about whether the real constraint is vulnerability discovery or the ability to act on what's already known. Noah Kenney, founder and principal consultant at Digital 520, an IT services company, explained that, in most cases, fixing a vulnerability is easier than identifying one. "Improved automated detection allows teams to focus more on remediation than discovery," he said.
As discovery accelerates, the challenge shifts toward prioritization. It's no longer just about the volume of vulnerabilities uncovered, but about judgment -- identifying which vulnerabilities are truly exploitable and which fixes will meaningfully reduce risk.
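That kind of judgment can be expressed as a scoring rule. The sketch below is a hypothetical illustration of risk-based prioritization -- ranking findings by exploitability and asset importance rather than raw severity alone. The field names, weights and CVE labels are illustrative assumptions, not a standard or anything described by Anthropic.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cve_id: str
    severity: float          # e.g., CVSS base score, 0-10
    exploit_available: bool  # a working exploit is known to exist
    asset_criticality: float # 0-1, importance of the affected system

def priority(f: Finding) -> float:
    """Weight raw severity by exploitability and asset importance."""
    exploit_factor = 2.0 if f.exploit_available else 1.0
    return f.severity * exploit_factor * (0.5 + f.asset_criticality)

findings = [
    Finding("CVE-A", severity=9.8, exploit_available=False, asset_criticality=0.2),
    Finding("CVE-B", severity=6.5, exploit_available=True, asset_criticality=0.9),
]
ranked = sorted(findings, key=priority, reverse=True)
# The moderate-severity flaw with a live exploit on a critical asset
# outranks the "critical" CVSS score on a low-value system.
```

Under this toy rule, a 6.5-severity flaw with an active exploit on a critical asset scores higher than a 9.8 on a peripheral one -- the kind of inversion that fixed severity ratings miss.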
This is also changing how organizations understand their attack surface. AI can now chain together vulnerabilities once considered low risk into real attacks, making traditional severity ratings less reliable.
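Chaining can be pictured as a path search over a graph of weaknesses. The sketch below is a minimal, hypothetical model: each edge is a flaw that looks low risk in isolation, yet a simple breadth-first search finds a complete route from the internet to a database host. The systems and flaws named here are invented for illustration.

```python
from collections import deque

# Each edge is (from_state, to_state, weakness); every weakness alone
# would typically be rated low severity.
edges = [
    ("internet", "web_app", "verbose error messages"),
    ("web_app", "internal_api", "missing rate limiting"),
    ("internal_api", "db_host", "stale service credential"),
]

def attack_path(edges, start, target):
    """Breadth-first search for a chain of weaknesses from start to target."""
    graph = {}
    for src, dst, weakness in edges:
        graph.setdefault(src, []).append((dst, weakness))
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == target:
            return path
        for dst, weakness in graph.get(node, []):
            if dst not in seen:
                seen.add(dst)
                queue.append((dst, path + [weakness]))
    return None

# Three "low" findings chain into a full path to the database.
path = attack_path(edges, "internet", "db_host")
```

The point of the model is that severity is a property of the path, not of any single edge -- which is why per-finding ratings understate chained risk.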
Kenney also warned that software supply chains are becoming more exposed, as attackers can discover and exploit flaws in widely used third-party components faster than they can be publicly identified and patched.
At the same time, AI amplifies an existing imbalance: attackers need only a single weakness to succeed, while defenders must secure entire systems. By reducing the cost, time and skill needed to find vulnerabilities, AI increases the likelihood that attackers will find those entry points and exploit them.
Getting execution right
Systems like Mythos aren't simply changing the dynamics between attackers and defenders; they're widening the gap between organizations that can translate security capabilities into decisive action and those that cannot.
U.S. agencies are reportedly exploring controlled access to Mythos-level systems, while banks and other financial institutions are voicing broader concerns about systemic exposure. Those concerns are not purely theoretical. Early reports indicate that a small group of unauthorized users gained access to Mythos through a third-party environment shortly after its limited release, underscoring the difficulty of fully containing systems of this capability.
But Osborne cautioned that many of these concerns are overstated, noting that much of the testing so far has taken place in controlled environments. He argued that the risk is more credible in less mature organizations -- those with slower patch cycles, weaker visibility or heavy reliance on third-party components.
"Even then, AI is more likely to act as an accelerant than a trigger," he said, noting that a true crisis would require multiple failures to align, including scalable exploitation capabilities and slow remediation.
For enterprises, the bigger issue is operational readiness. Many organizations can detect vulnerabilities but struggle to act at speed, especially in complex or legacy environments where patching can disrupt production.
As a result, leaders need to rethink vulnerability management: it must be treated not as routine maintenance, but as a time-critical risk function focused on the most likely and most damaging threats.
In practice, this means enabling rapid response without disrupting production environments.
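One way to operationalize "time-critical" is to tie remediation deadlines to exploit status rather than a fixed patch cycle. The sketch below is a hypothetical illustration; the SLA values and status labels are assumptions, not an industry standard.

```python
from datetime import datetime, timedelta

# Hours allowed to remediate, keyed by exploit status (illustrative values).
SLA_HOURS = {"exploit_active": 24, "exploit_likely": 72, "no_exploit": 720}

def overdue(found_at: datetime, exploit_status: str, now: datetime) -> bool:
    """True if the finding's exploit-driven remediation window has passed."""
    deadline = found_at + timedelta(hours=SLA_HOURS[exploit_status])
    return now > deadline

now = datetime(2026, 4, 10, 12, 0)
# Found two days ago with an active exploit: the 24-hour window has passed.
breach = overdue(datetime(2026, 4, 8, 12, 0), "exploit_active", now)
```

A monthly patch cadence would treat both findings above identically; deadlines keyed to exploit status surface the one that cannot wait.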
As Kenney explained, "The organizations that will fare best in this new landscape are those that can quickly monitor, patch and redeploy without breaking production systems."
A test case for AI governance
Anthropic's decision to restrict access to Mythos highlights another emerging issue: governance.
According to Osborne, limiting availability in early stages is a responsible step, but it's not a long-term solution. Similar capabilities are likely to emerge elsewhere, potentially without the same safeguards.
Over time, the focus will shift from who controls a specific tool to how organizations govern these systems broadly, including vendor responsibility, enterprise accountability and regulatory oversight.
Looking ahead, AI-driven vulnerability discovery is expected to become standard across cybersecurity within the next few years. What will change is not the nature of cyber risk, but its speed. Vulnerabilities will be found faster, exploitability will become clearer more quickly and response windows will continue to shrink.
This doesn't require a new security paradigm, but it does demand better execution of existing practices. As Osborne said, "Success will increasingly come down to speed, discipline and resilience."
Kinza Yasar is a technical writer for TechTarget's AI & Emerging Tech group and has a background in computer networking.