SECURITY
SECURITY
SECURITY
A new report out today from Cato Networks Ltd.’s Cato CTRL threat research team details a newly discovered indirect prompt injection technique that can manipulate artificial intelligence browser assistants through legitimate websites.
Dubbed “HashJack,” the technique is described as the first known method that weaponizes any normal URL by hiding malicious prompts after the “#” symbol. The ability to do so allows attackers to influence AI assistants embedded in browsers such as Perplexity AI Inc.’s Comet, Microsoft Corp.’s Copilot for Edge and Google LLC’s Gemini for Chrome.
HashJack works by embedding hidden instructions inside a URL fragment, a part of the URL address that never leaves the client browser and is not logged or inspected by web servers or network tools.
When an AI browser loads the page and the user asks a related question, the AI assistant incorporates the hidden fragment into its context window, treating it as part of the page content. The hidden fragment can trigger the assistant to generate misleading guidance, fabricate links, send users to attacker-controlled pages or, in the case of agentic AI such as Comet, execute autonomous actions such as background data fetches to malicious endpoints.
The research outlines six attack scenarios that the technique enables, including callback phishing, data exfiltration, misinformation, malware guidance, medical-related harm and credential theft.
In testing, Perplexity’s Comet browser proved most vulnerable to HashJack, as its agentic capabilities allowed the assistant to act on the hidden instructions automatically, including sending user context such as account details or email addresses to attacker servers.
Copilot for Edge and Gemini for Chrome also showed exploitable behaviors, though both applied some gating or link rewriting that reduced, but did not eliminate, the risk.
Before going public with the details, the Cato CTRL threat research team reported its findings to Perplexity, Microsoft and Google over the past several months with mixed responses.
The team said Perplexity triaged the issue as critical and applied a fix in November. Microsoft confirmed the behavior and implemented a fix in late October and emphasized its broader defense-in-depth strategy for indirect prompt injection. Google, however, classified the behavior as intended and marked it “Won’t Fix,” leaving Gemini for Chrome susceptible.
The researchers argue that HashJack underscores a broader design flaw emerging in AI browsers, which routinely pass full URLs to their embedded assistants without sanitizing fragments. Because users see a trusted website and rely heavily on AI assistants for guidance, the output can appear legitimate even when it is being secretly manipulated.
“Cato CTRL’s findings highlight the urgent need for security frameworks that address both prompt injection risks and weaknesses in AI browser design,” the report concludes. “As AI browser assistants gain access to sensitive data and system controls, the risk of context manipulation will only grow. AI browser vendors and security experts must act now, before widespread adoption makes these attacks inevitable in the real world.”
The report comes a week after browser security company SquareX Ltd. warned of a hidden application programming interface in Perplexity’s Comet browser that allows extensions in the AI browser to execute local commands and gain full control over users’ devices.
Support our mission to keep content open and free by engaging with theCUBE community. Join theCUBE’s Alumni Trust Network, where technology leaders connect, share intelligence and create opportunities.
Founded by tech visionaries John Furrier and Dave Vellante, SiliconANGLE Media has built a dynamic ecosystem of industry-leading digital media brands that reach 15+ million elite tech professionals. Our new proprietary theCUBE AI Video Cloud is breaking ground in audience interaction, leveraging theCUBEai.com neural network to help technology companies make data-driven decisions and stay at the forefront of industry conversations.