

A new report out today from Aim Security Ltd. reveals the first known zero-click artificial intelligence vulnerability that could have allowed attackers to exfiltrate sensitive internal data without any user interaction.
The vulnerability, dubbed “EchoLeak,” was found in Microsoft Corp.’s 365 Copilot generative AI tool in January and reported to Microsoft at the time. Aim has come forward with the details only now that the vulnerability has been addressed.
The vulnerability involved what Aim describes as an “LLM Scope Violation,” referring to scenarios where a large language model can be manipulated into leaking information beyond its intended context. In the case of the EchoLeak vulnerability, it involved crafting a malicious email containing specific markdown syntax that could slip past Microsoft’s Cross-Prompt Injection Attack defenses.
The markdown in the malicious email utilizes reference-style image and link formats to bypass Copilot’s sanitization filters, ensuring the payload is preserved when the AI assistant retrieves and processes the email.
From there, the exploit could make use of Microsoft’s own trusted domains, including SharePoint and Teams, which are whitelisted under Copilot’s content security policies. The domains can be used to embed external links or images that, when rendered by Copilot, automatically issue outbound requests. The attackers can redirect the content to a server they control by crafting these references to include sensitive data retrieved from Copilot’s context.
Critically, according to Aim’s researchers, all of this happens behind the scenes. Users themselves don’t have to open the email or click on anything with Copilot’s automated processing being enough to trigger the entire chain, hence the zero-click designation for EchoLeak.
Aim released a working proof-of-concept showing that data such as internal memos, strategic documents or even personal identifiers could be leaked without any visual indication to the user or system administrators. Microsoft, in response, has acknowledged the issue but did note that it has found no evidence of the vulnerability being exploited in the wild.
While it’s positive that the vulnerability wasn’t exploited in the wild, the fact that AI services can be vulnerable to zero-click attacks opens a Pandora’s Box of future risk, be it that some cybersecurity experts are not surprised by the methodology’s emergence.
“If you didn’t expect something like this to happen, you haven’t been paying attention,” Tim Erlin, security strategist at application programming interface security firm Wallarm Inc., told SiliconANGLE via email.
“While the specific technique might not have been predictable, the idea that researchers wouldn’t find some kind of meaningful, novel exploit for the ever-expanding AI attack surface is ridiculous,” explained Erlin. “It was bound to happen. Microsoft and the researchers appear to have handled this one well, with responsible disclosure and a fix.”
Ensar Seker, chief information security officer at extended threat intelligence company SOCRadar Cyber Threat Intelligence Inc., warns that the disclosure has “serious implications for NATO, government, defense, healthcare and anyone using enterprise AI assistants: attackers no longer need to compromise user credentials or rely on phishing. They can manipulate a trusted AI interface directly.”
“What stands out especially is that this isn’t limited to Copilot,” he added. “As Aim Labs warns, any RAG-based agent that processes untrusted inputs alongside internal data is vulnerable to scope violations. This signals a broader architectural flaw across the AI assistant space — one that demands runtime guardrails, stricter input scoping and inflexible separation between trusted and untrusted content.”
THANK YOU