UPDATED 22:16 EDT / MAY 03 2026

SECURITY

AI exposes attacks traditional detection methods can’t see

Most discussions about artificial intelligence security are focused on what models might do wrong. The more urgent issue is what our detection systems still cannot see — and side-channel attacks are making that gap visible.

Side-channel attacks gather information or interfere with a program’s execution by targeting physical characteristics such as power consumption, electromagnetic emissions and processing time rather than targeting software code. They can exfiltrate sensitive information such as cryptographic keys by measuring incidental hardware emissions.
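Timing is the easiest of these channels to illustrate. The sketch below, a minimal and deliberately simplified example with an invented secret, shows how an early-exit string comparison leaks information: the number of loop iterations (a stand-in for execution time) grows with the length of the matching prefix, so an attacker who can measure time can recover a secret character by character. Constant-time comparison functions such as Python's `hmac.compare_digest` exist precisely to close this leak.

```python
import hmac

def naive_compare(secret: str, guess: str) -> tuple[bool, int]:
    """Early-exit comparison; the iteration count is a proxy for execution time."""
    steps = 0
    for s, g in zip(secret, guess):
        steps += 1
        if s != g:
            return False, steps
    return secret == guess, steps

SECRET = "hunter2"  # invented secret for illustration only

# A guess sharing a longer prefix with the secret takes measurably more steps:
_, t_bad = naive_compare(SECRET, "xxxxxxx")    # wrong first char -> 1 step
_, t_close = naive_compare(SECRET, "huntxxx")  # 4-char prefix match -> 5 steps

# The defensive fix is a comparison whose duration is independent of the input:
safe = hmac.compare_digest(SECRET.encode(), b"hunter2")
```

Nothing in the program's output reveals the secret; the leak lives entirely in how long the code runs, which is exactly the kind of signal content-inspection tools never see.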

Recent research has shown that an outside observer can infer the topic of an AI interaction simply by analyzing encrypted traffic patterns. No decryption required. No payload inspection. Just structure, timing and sequence. The signal exists, but it lives outside the content that security tools were designed to inspect.
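To make the idea concrete, here is a toy sketch of traffic-pattern fingerprinting. All of the "traffic" below is invented, and real attacks use far richer features and learned models, but the principle is the same: a classifier operating only on packet lengths, never on payloads, can distinguish one kind of session from another.

```python
# Hypothetical illustration: label an encrypted session from its packet-size
# pattern alone. The profiles and sizes below are invented for this sketch.

def features(sizes: list[int]) -> tuple[float, float]:
    """Summarize a session as (mean packet size, standard deviation)."""
    mean = sum(sizes) / len(sizes)
    var = sum((s - mean) ** 2 for s in sizes) / len(sizes)
    return (mean, var ** 0.5)

def nearest(feats: tuple[float, float], profiles: dict) -> str:
    """Return the profile whose fingerprint is closest in feature space."""
    return min(
        profiles,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(feats, profiles[name])),
    )

# Invented fingerprints: small bursty packets vs. large steady transfers.
PROFILES = {
    "chat": features([120, 90, 140, 80, 110]),
    "bulk_download": features([1400, 1380, 1420, 1400, 1390]),
}

# Observed packet lengths of a new session -- the payloads stay encrypted.
session = [1500, 1350, 1400, 1450, 1380]
label = nearest(features(session), PROFILES)
```

The classifier never touches plaintext; structure alone is enough to separate the two invented profiles, which is why this signal falls outside payload-inspection tooling.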

This is not just a novel attack technique. It is evidence of a broader failure in the design of detection. We built security architectures to match indicators. Increasingly, the most important signals do not present as indicators at all.

Where rules fail

For the past two decades, detection has been defined by rules. Signatures, thresholds, known patterns and anomaly baselines have formed the backbone of security operations. The industry has gotten good at creating more rules, better rules and now AI systems that help write and tune them faster.

But none of that changes the underlying constraint. Rules require something discrete to match: a known artifact, a recognizable deviation or a boundary crossed.

Side-channel attacks don’t provide that. Neither do many modern intrusions. An attacker operating through encrypted channels, legitimate tools or AI-assisted workflows can move through an environment without ever triggering a condition that a rule can evaluate. The activity is valid at every individual step. The pattern becomes visible only when you look at how those steps connect over time.

That is the detection gap. It is not a matter of coverage but an architectural limitation. A whole class of attacker behavior produces no alert because it produces no matchable signal.

The gap is growing

The practical consequence of this gap is straightforward. There are scenarios in which attackers operate within an environment, and the security team receives no signal at all. Not a low-confidence alert. Nothing to investigate.

Side-channel attacks are one example of this. The data is present, but it sits in timing, sequencing and interaction patterns that traditional tools are not designed to interpret. The same is true for low-and-slow intrusions, living-off-the-land techniques and AI-assisted attack chains that adapt as they move.

As organizations expand their use of AI, both in business operations and in attack tooling, the proportion of activity that falls into this gap increases.

At the same time, most security investments continue to focus on optimizing what is already covered: faster rule creation, better tuning and more efficient alert triage. These improvements matter, but they do not address the portion of the attack surface that produces no alert in the first place.

AI at the wrong layer

A significant amount of AI is being deployed across security operations today. Much of it is valuable in helping summarize alerts, accelerating investigations and reducing the operational burden on analysts.

But most of these systems are applied after a detection has already occurred. They improve response. They do not fundamentally change how detection works. This distinction matters.

If a class of attacker behavior doesn’t generate an alert, then no amount of automation, summarization or prioritization will surface it. Side-channel attacks reinforce this point. The signal exists, but it is not expressed in a way that rule-based systems or post-detection AI can process. The same dynamic applies to any attack that unfolds through legitimate actions, encrypted channels or gradual progression.

The industry is investing heavily in making detection workflows more efficient. It is investing far less in expanding what detection can actually observe.

Behavior, not just events

Closing this gap requires a different approach to detection, one that doesn’t depend on predefined indicators or human-authored rules.

The signals security teams need to detect these new attacks already exist. The sequence of actions, the relationships between systems and the timing and progression of behavior reveal intent in a way that individual events cannot.

Ironically, many of the same deep learning approaches that make side-channel attacks effective can also be applied defensively, analyzing traffic patterns to determine whether they contain attacks.

An attacker staging lateral movement through encrypted channels leaves a trace, not in the content of the traffic, but in how access patterns evolve. A side-channel leak doesn’t expose data directly, but it exposes structure. The same principle applies across modern attack techniques.

To read that signal, detection systems must operate on behavioral sequences rather than isolated events. They must evaluate whether the activity aligns with how systems are expected to function over time, not just whether a single action appears anomalous. This is a different category of problem than rule optimization. It requires models that can learn from structured operational data and identify patterns that were never explicitly defined in advance.
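One simple way to see the difference between event matching and sequence evaluation is a first-order Markov model over event transitions, a deliberately minimal sketch with invented event names rather than any vendor's actual method. Every individual event below ("login", "read", and so on) is legitimate on its own; only the ordering distinguishes routine use from a staged intrusion.

```python
from collections import defaultdict
import math

def train(sequences: list[list[str]], smoothing: float = 1e-3) -> dict:
    """Learn event-to-event transition probabilities from normal activity."""
    counts: dict = defaultdict(lambda: defaultdict(float))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    model = {}
    for a, nxt in counts.items():
        total = sum(nxt.values())
        model[a] = {
            b: (c + smoothing) / (total + smoothing * len(nxt))
            for b, c in nxt.items()
        }
    return model

def surprise(model: dict, seq: list[str], floor: float = 1e-6) -> float:
    """Average negative log-likelihood per transition; higher = more anomalous."""
    nll = 0.0
    for a, b in zip(seq, seq[1:]):
        nll -= math.log(model.get(a, {}).get(b, floor))
    return nll / max(1, len(seq) - 1)

# Invented baseline of normal sessions.
normal = [["login", "read", "read", "logout"]] * 50
model = train(normal)

routine = surprise(model, ["login", "read", "logout"])
staged = surprise(model, ["login", "enumerate_hosts", "copy_creds", "logout"])
```

No single event in the staged session would trip a rule, yet its `surprise` score is far higher than the routine session's, because the model evaluates how the steps connect rather than what each step is.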

For security leaders evaluating AI investments, this is a useful distinction. Some systems make existing detection workflows more efficient. Others expand the detection surface itself by identifying behavior that rules cannot express. Both have value, but they solve fundamentally different problems.

Rethinking detection

For most organizations, the first step is not adding new tools; it is developing a more accurate understanding of what their current detection strategy actually covers.

That starts with an honest assessment of visibility. That’s not just whether a rule exists for a given technique, but whether the system can reliably detect it under realistic conditions. Early-stage reconnaissance, subtle lateral movement and activity that blends into normal operations are often where the largest gaps appear.

It also requires examining how detection is performed. If the entire stack is built around matching individual events or predefined indicators, then the limitation is structural. Improving rule quality will not resolve it.

When evaluating AI-driven security capabilities, the key question is straightforward: Does this system detect behavior that cannot be captured in a rule, or does it make rule-based detection more efficient? That distinction is critical for making informed investment decisions.

Closing the detection gap addresses more than response times. It changes when organizations become aware that something is wrong.

Earlier detection reduces dwell time, limits the scope of incidents and gives defenders the opportunity to act before attackers achieve their objectives. It also provides a more accurate picture of actual risk exposure. Many organizations operate with an inflated sense of visibility because their tools perform well within a constrained detection model.

Side-channel attacks are a useful signal in their own right. They demonstrate that meaningful information can exist outside the boundaries of what traditional systems are designed to inspect. More importantly, they highlight how much of that information is currently being ignored.

AI did not introduce this problem. It exposed it.

The organizations that adapt won’t be the ones that simply move faster within existing detection models, but the ones that expand what detection can see in the first place.

Evan Powell is chief executive officer of DeepTempo, the business name of Skidaway Inc. He wrote this article for SiliconANGLE.

Image: SiliconANGLE/DALL-E 3
