UPDATED 22:58 EDT / APRIL 20 2026

SECURITY

Mythos remains a mystery as security world faces rising threats, agentic attacks and concerns about AI integrity

Anthropic PBC’s Claude Mythos model has emerged as the most widely discussed artificial intelligence solution without being fully released.

Information about the model, which reportedly can analyze software at scale, find bugs in hardened software ecosystems and identify vulnerabilities, has been tightly controlled by Anthropic. That situation did not change much on Monday, when Anthropic Head of Threat Intelligence Jacob Klein (pictured) spoke at the SANS Cybersecurity Summit in a hotel just outside Washington, D.C., although he did offer a hint of the model’s capabilities during his appearance.

Klein offered a brief description of the model’s power in the context of how rapidly AI has changed the cybersecurity world and vowed transparency in the months ahead.

“It’s very good at finding vulnerabilities and chaining them together for an exploit,” Klein told the group. “You have to rethink what your risk picture looks like now. The landscape has changed today. There is a trade-off that we have to balance out. We will be transparent, and I would hope that the other labs will have the same level of visibility.”

Breaches are accelerating

Klein’s appearance at the SANS Institute’s gathering comes at a time when the pace of AI-related breaches has picked up dramatically. Over the past weekend, cloud development platform Vercel Inc. disclosed that its internal systems had been breached through a compromise of Context.ai, a third-party tool used by a Vercel employee.

Hackers have since claimed to have stolen customer credentials from Vercel and have put the data up for sale online. This followed a report earlier this month that a North Korean threat actor inserted malicious code into the widely used JavaScript library Axios, as adversaries use AI to probe every link in the supply chain.

Events such as these and the discussion surrounding Mythos prompted a meeting at the White House between the Treasury Secretary and the chief executive of Anthropic late last week. This weekend, the Financial Times reported that major banks are strengthening their defenses against a rising number of cyberattacks.

“The capabilities of AI are increasing the scale of the attack surface that attackers have available to them,” Klein said.

Moving at machine speed

Anthropic’s head of threat intelligence presented a brief history of how the Claude AI model has been adopted by malicious actors. It illustrated how fast the cyberthreat landscape has evolved.

The company initially saw evidence of Claude’s use in the spring of 2025 when a lone actor used the model to build a fairly unsophisticated ransomware attack. Two months later, Anthropic discovered a Russian cybercriminal who employed Claude to conduct an extortion operation. By September 2025, the company had evidence that a state-sponsored group in China was using Claude for system reconnaissance, penetration testing at scale, exploitation, access and then lateral movement within a breached network.

Klein noted that the goal in the Chinese example was espionage and exfiltration of data, with 80% to 90% of the actions driven autonomously.

“Once it was built, it was fairly easy,” Klein said. “Mostly it’s Claude itself just taking actions. The human here has become the supervisor.”

Much as popular AI models have enabled well-meaning non-programmers to build agents that perform tasks at lightning speed, Anthropic’s research highlights how threat actors are following the same playbook to build exploitation tools they could not create on their own. The company has mapped 800 bad actors against MITRE techniques to gain a better picture of how adversaries are using AI to circumvent defenses, and a report should be available soon, according to Klein.

“At this point AI systems are becoming a core piece of architecture for bad actors,” Klein said. “My job is to find bad actors and understand what they’re doing.”

Building architecture for stronger defense

Klein’s point about AI systems becoming a core piece of architecture for threat actors underscores how rapidly the cyberthreat landscape is shifting. Mythos could represent the kind of architecture, or scaffolding, needed to defend successfully against AI-related attacks, according to one leading security researcher.

Knostic co-founder Sounil Yu discussed the latest AI threats during the SANS Cybersecurity Summit.


Speaking at the SANS Summit, Knostic Inc. co-founder and Chief AI Security Officer Sounil Yu invoked the analogy of the “big bad wolf” blowing down the straw house from “The Three Little Pigs.”

“Most think we should build with bricks; instead, we should focus on the notion of architecture,” Yu told the SANS gathering. “Architecture sometimes matters more than just the materials.”

Development of Mythos-like tools that can bolster cyber defenses and create sturdy architecture has taken on more urgency in recent months with the growing adoption of AI agents. The most prominent example of this dynamic has been OpenClaw, a highly popular open-source personal AI assistant that has notoriously weak security controls.

Nvidia Corp., Cisco Systems Inc. and Knostic have all released security-strengthened versions of OpenClaw in an effort to keep the tool from opening new vulnerabilities in enterprise organizations.

“The Claw has already left the tank, and you probably already have it running in your organization,” Yu noted. “Unfortunately, OpenClaw by default is right in the danger zone; it pulls in skills from who knows where. OpenClaw is really just a wakeup call to a lot of enterprises.”

Call for integrity

That wakeup call is also leading some prominent voices in the cybersecurity world to warn that AI is heading down a road lacking integrity. As AI takes on an ever larger role in critical systems, can it be trusted?

This is the dilemma that the cybersecurity community must confront, according to Bruce Schneier, previously a faculty affiliate at the Harvard Kennedy School and currently an adjunct professor at the University of Toronto. Schneier expressed concern that the current lack of guardrails around AI usage and the motivations of nation states could result in far more dangerous outcomes on the world stage.

“We are already seeing Russian attacks to manipulate training data,” Schneier said during a presentation. “Imagine AI being used as an advisor in international trade negotiations. There is going to be an economic incentive to hack that AI. We need trustworthy AI.”

Schneier said this can realistically be accomplished only by government intervention, through transparency laws and regulation of AI and robotic safety. He made the point that a focus on AI integrity will be a critical mandate for security professionals at a time when AI is becoming increasingly viewed as a trusted adviser and agentic employee.

“I predict that integrity is the key security problem of the next decade,” Schneier said. “Our confusion will increase with AI. We are going to think of AI as a friend, when it is not.”

Photos: SANS Institute/livestream
