SECURITY
On April 7, 2026, Anthropic did something unprecedented in the history of artificial intelligence: The company announced that it had built its most capable model ever and would not be releasing it to the public.
The model had not failed. In fact, it had performed so well, across such consequential domains, that Anthropic concluded the constraint infrastructure required to deploy it responsibly did not yet exist.
In the weeks of testing before the announcement, Claude Mythos Preview had identified critical vulnerabilities in every major operating system and every major web browser – thousands of flaws that had survived, in some cases, decades of human review and millions of automated security tests. The same capability that made it an extraordinary defensive tool made it, in the wrong hands, a means to compromise virtually any major software system in the world.
Anthropic’s response was Project Glasswing: a consortium of 50 of the leading technology and critical infrastructure organizations committed to finding and patching vulnerabilities before the capability proliferated beyond responsible actors. The company was explicit about why Mythos itself would remain unreleased: “We need to make progress in developing cybersecurity and other safeguards that detect and block the model’s most dangerous outputs.” The most safety-focused AI laboratory in the world had built a system it could not yet safely constrain, so it paused.
Anthropic asked whether adequate constraints existed before deciding whether to deploy. For many organizations deploying AI, that question comes later – if it comes at all.
Human beings do not require external governance to prevent the most harmful behaviors. We are constrained from within by biology, social accountability, legal consequence and the cognitive limits that prevent any individual from optimizing at machine speed and scale. These constraints were not designed; they emerged over millennia. They are imperfect, but they exist as a baseline.
AI systems inherit none of these. Every limit is one someone chose to engineer. An AI system given an objective will pursue it through whatever path is mathematically available – including those that involve collusion, discriminatory outcomes, unauthorized resource acquisition, or, as Mythos Preview demonstrated, the autonomous exploitation of critical infrastructure vulnerabilities. It’s not because the system is malicious, but because nothing was in place to prevent it.
This is not a flaw. It is the nature of these systems, and it is the central governance challenge every organization deploying AI faces today.
A mature AI governance program looks like other rigorous organizational disciplines such as DevSecOps, regulatory compliance and financial controls. It inventories every AI system in production, assesses it against a proportional set of technical, operational and governance controls, measures the gap between what is prescribed and what is actually implemented and reviews that gap on a defined schedule as systems and their environments evolve. It is systematic, documented and auditable – not a policy document, but a practice.
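To make that concrete, here is a minimal sketch, in Python, of what a control-gap review might look like. The control names, risk tiers and inventory entries are hypothetical illustrations, not a prescribed framework:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical baseline: which controls each risk tier prescribes.
# A real program would map these to a framework such as the NIST AI RMF.
PRESCRIBED_CONTROLS = {
    "high": {"output_filtering", "human_review", "audit_logging", "access_control"},
    "medium": {"audit_logging", "access_control"},
}

@dataclass
class AISystem:
    name: str
    risk_tier: str                      # "high" or "medium" in this sketch
    implemented_controls: set[str] = field(default_factory=set)
    last_reviewed: date | None = None   # reviewed "on a defined schedule"

def control_gap(system: AISystem) -> set[str]:
    """Return the prescribed controls this system has not implemented."""
    prescribed = PRESCRIBED_CONTROLS.get(system.risk_tier, set())
    return prescribed - system.implemented_controls

# A two-system inventory, reviewed in one pass.
inventory = [
    AISystem("support-chatbot", "medium",
             {"audit_logging", "access_control"}, date(2026, 3, 1)),
    AISystem("loan-scoring", "high",
             {"audit_logging"}, date(2026, 1, 15)),
]

for system in inventory:
    gap = control_gap(system)
    status = "no gap" if not gap else f"gap: {sorted(gap)}"
    print(f"{system.name} ({system.risk_tier}): {status}")
```

The point is not these particular controls but the shape of the practice: an inventory, a prescribed baseline per risk tier and a measurable, reviewable gap between the two.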
That standard exists in other domains because those domains built it over decades of incidents, regulation and accumulated institutional knowledge. AI governance is only a few years into that same process. Most organizations have not yet had the time, the mandate or the forcing function to develop their AI governance to the same level of rigor as the compliance and security practices they have spent years maturing.
Competitive pressure compounds the problem. With so much market uncertainty and a regulatory environment still taking shape, many organizations are moving faster than their governance programs can keep up. We are watching the industry, its standards and its regulations mature in real time.
The most important lesson of the Glasswing announcement is about sequence. Anthropic did not build Mythos Preview and then ask whether it was safe to release. The company evaluated the system’s capabilities rigorously, concluded that the constraint infrastructure didn’t exist to deploy it responsibly and chose to withhold it from the public. The governance question came before the deployment decision.
Unfortunately, that sequence is more often the exception than the rule in businesses, due to market forces that reward speed and a governance ecosystem that has not yet caught up.
Writing on the day of the announcement, New York Times columnist Thomas Friedman argued that what Mythos Preview represents is potentially as consequential as the emergence of nuclear weapons and the nonproliferation regime it demanded: a capability no single organization or country can manage alone. He is not wrong, but the civilizational scale of the problem does not excuse the organizational one. Every organization deploying AI systems today faces a version of the same question Anthropic answered with Mythos: Is the constraint infrastructure adequate relative to the capability being deployed?
Many organizations do not yet have a reliable answer – not out of indifference, but because the frameworks, standards and regulatory guidance needed to make that evaluation with confidence are still being developed.
Project Glasswing is a beginning, involving multiple organizations, a defensive mandate and a $100 million commitment applied to a specific threat. It is not a solution to the broader challenge it has illuminated.
That challenge belongs to every organization that builds or deploys AI. Treat constraint adequacy as a deployment prerequisite, not a post-deployment remediation task. Measure the gap between what governance documents say and what AI systems actually do. Recognize that as AI capability advances, the constraint systems designed for current capabilities require continuous reassessment.
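One way to operationalize the first of those directives is to make the gap check a gate that a release pipeline must pass, rather than a finding filed after launch. The sketch below is a hypothetical illustration of that policy, not an established tool:

```python
class ConstraintGapError(Exception):
    """Raised when prescribed safeguards are missing at deployment time."""

def deployment_gate(system_name: str, prescribed: set[str],
                    implemented: set[str]) -> None:
    """Fail the release if any prescribed control is unimplemented.

    Hypothetical policy: constraint adequacy is a prerequisite, so a
    gap blocks deployment instead of becoming a remediation ticket.
    """
    gap = prescribed - implemented
    if gap:
        raise ConstraintGapError(
            f"{system_name}: missing controls {sorted(gap)}"
        )

# Example pipeline step: the release halts before, not after, deployment.
try:
    deployment_gate(
        "loan-scoring",
        prescribed={"output_filtering", "human_review", "audit_logging"},
        implemented={"audit_logging"},
    )
except ConstraintGapError as err:
    print(f"deployment blocked: {err}")
```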
Anthropic’s choice demonstrated something rare: the discipline to ask the governance question honestly and act on the answer, even when the answer was inconvenient.
The organizations that will be on the right side of AI’s history are the ones asking that question now – before the incident that makes the answer undeniable.
John Waller is risk advisory practice lead at managed security services firm UltraViolet Cyber Inc. He wrote this article for SiliconANGLE.