UPDATED 15:03 EDT / AUGUST 21 2025

SECURITY

Menlo Security research finds use of shadow AI is booming

As generative artificial intelligence tools are becoming more prevalent in the workplace, employees are accessing these tools via personal accounts on company devices, pasting in sensitive data, and downloading content — all of which creates potential security risks. Meanwhile, cybercriminals are capitalizing on this trend by weaponizing AI and impersonating trusted tools.

Menlo Security Inc. recently released a new report that takes a closer look at how gen AI is shaping today’s workplace. The data was collected over 30 days (May-June 2025) using Menlo’s telemetry. During this period, web traffic and gen AI interactions were analyzed from hundreds of global organizations. Since most gen AI tools are accessed via a browser, Menlo was able to observe browser traffic to gen AI sites and regional adoption trends.

To frame its findings in a broader context, Menlo also cites Similarweb data showing that between February 2024 and January 2025, traffic to gen AI sites jumped from 7 billion visits to more than 10.5 billion visits. That’s a 50% increase in less than a year.

About 80% of gen AI use still happens in the browser, a convenient option for most users because it works across virtually all devices and operating systems. ChatGPT, unsurprisingly, tops the list. It now has about 400 million weekly users. Yet the vast majority, 95%, are on the free tier.

The benefit of the free tier is that it’s free, but as the saying goes, you don’t get what you don’t pay for. The paid tiers use better models and give more accurate responses, which matters in a business context. Also, OpenAI’s privacy policy states that it may use data provided to the free service to train its models. Users can opt out, but many shadow AI users may not be aware of that option. For business or sensitive data, a paid tier such as ChatGPT Enterprise or the API ensures data is not used for model training by default.

There’s no doubt that gen AI adoption has skyrocketed globally. While the Americas saw the most total traffic, gen AI use is growing fastest in the Asia-Pacific. In China, 75% of organizations are implementing gen AI in some way. Nearly as many, 73%, are doing the same in India. However, Europe and the Middle East are adopting gen AI more slowly, which the report attributes to stricter data protection laws and regulatory frameworks.

Given the popularity of gen AI tools, organizations are increasingly seeing them in the workplace. According to a TELUS Digital survey cited in Menlo’s report, 68% of employees are using public tools such as ChatGPT through personal accounts. What’s even more concerning: Fifty-seven percent admitted to pasting sensitive company information into these tools. In just one month, Menlo observed more than 155,000 copy attempts and more than 313,000 paste attempts involving gen AI.

Many organizations flagged this content as sensitive or restricted, including personal information, financial data, login credentials and intellectual property. Employees may unintentionally leak data while using gen AI to summarize a report or write an email, according to Menlo. But sharing information isn’t the only problem: Employees also download PDFs and text files from gen AI tools, and those files may carry embedded malware or phishing links.

It’s also becoming more difficult to distinguish between legitimate and fake AI tools, with malicious browser extensions on the rise. Menlo tracked nearly 600 phishing sites posing as legitimate gen AI services, often embedding names such as ChatGPT or Copilot in their domain names. Between December 2024 and February 2025, researchers tracked more than 2,600 lookalike domain names and impersonation websites.
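To illustrate how such lookalikes can be surfaced, here is a minimal sketch using only Python’s standard library. The brand list and similarity threshold are hypothetical, and this is a heuristic illustration rather than Menlo’s actual methodology:

```python
# Heuristic lookalike-domain screen -- an illustrative sketch, not Menlo's tooling.
from difflib import SequenceMatcher

# Known-legitimate hostnames for a few gen AI brands (illustrative, not exhaustive).
LEGIT_HOSTS = {
    "chatgpt": {"chatgpt.com", "chat.openai.com"},
    "copilot": {"copilot.microsoft.com"},
    "gemini": {"gemini.google.com"},
}

def looks_like_impersonation(domain: str, threshold: float = 0.8) -> bool:
    """Flag a domain that embeds or closely resembles a gen AI brand name
    but does not match any known-legitimate hostname."""
    domain = domain.lower().strip(".")
    if any(domain in hosts for hosts in LEGIT_HOSTS.values()):
        return False  # exact match with a known-good host
    first_label = domain.split(".")[0]
    for brand in LEGIT_HOSTS:
        # Brand name embedded in an unrelated domain, e.g. chatgpt-login.app
        if brand in domain:
            return True
        # Near-miss spelling, e.g. chatgtp.com, caught by edit-distance similarity
        if SequenceMatcher(None, brand, first_label).ratio() >= threshold:
            return True
    return False

if __name__ == "__main__":
    for d in ["chatgpt.com", "chatgpt-login.app", "chatgtp.com", "example.org"]:
        print(f"{d}: {'suspicious' if looks_like_impersonation(d) else 'ok'}")
```

A production screen would also draw on certificate transparency logs and newly registered domain feeds, but even a crude check like this catches both embedded brand names and near-miss spellings.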

Cybercriminals are jumping on the bandwagon like everyone else, using gen AI to make their phishing attacks more convincing and tailored to specific individuals. For example, they’re combining AI-written phishing emails with other tactics that exploit browser flaws. This has resulted in a 130% year-over-year increase in zero-hour phishing attacks, which hit before security systems know they exist.

The use of “shadow” tools by workers is nothing new and should come as no surprise with gen AI. For as long as workers have had computers, consumer-grade tools have found their way into the workplace: mobile devices, personal internet accounts, email and cloud services are just a few examples. When workers have a way of making their lives easier, they will use whatever tools they have at their disposal.

If the company does not give them a viable option, that’s when the use of “shadow” apps and tools booms. With AI, many companies are still reviewing policies and trying to determine the best path forward, while the report makes clear that users are charging ahead.

Going forward, organizations need to take control of how gen AI is used. Menlo stresses the importance of eliminating shadow AI by limiting workplace access to consumer-facing gen AI tools through personal accounts. Organizations should make approved AI tools the only ones employees are allowed to use. On top of that, they should enforce data loss prevention policies that restrict actions such as copy/paste, file uploads and downloads, applying the right level of protection to each interaction.
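As a rough illustration of what such a DLP policy might check at the moment of a paste, here is a minimal Python sketch. The patterns are hypothetical stand-ins for the data classes organizations flag as sensitive; real DLP products use far more sophisticated classifiers:

```python
# Pattern-based DLP screen -- an illustrative sketch, not a product rule set.
import re

# Hypothetical patterns for data classes commonly flagged as sensitive.
SENSITIVE_PATTERNS = {
    "payment card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9]{16,}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def classify_paste(text: str) -> list[str]:
    """Return the names of sensitive data classes detected in a paste payload."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(text)]

def allow_paste(text: str) -> bool:
    """Block the paste and report matches if any sensitive class is detected."""
    hits = classify_paste(text)
    if hits:
        print("Paste blocked, matched: " + ", ".join(hits))
        return False
    return True

if __name__ == "__main__":
    allow_paste("Summarize: card 4111 1111 1111 1111, contact jane.doe@example.com")
```

In practice, a check like this would run inside a browser extension or secure enterprise browser, where the paste event can actually be intercepted before the data leaves the device.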

Menlo also recommends inspecting gen AI browser traffic and focusing closely on high-risk file types such as PDFs and DOCX. The files may appear harmless, but they often hide malware or phishing links. Adopting zero-trust security, particularly on unmanaged devices used by contractors and third parties, is another important safeguard. With zero-trust security, organizations can verify every user and device before granting them access to the corporate network.
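To make the file-inspection step concrete, here is a minimal sketch using only Python’s standard library: a first-pass triage of the two file types the report calls out, flagging DOCX archives that carry macros or external link targets, and PDFs whose raw bytes contain active-content markers. The file paths are hypothetical, and this is a crude heuristic, not a malware scanner:

```python
# Crude download triage for DOCX and PDF -- a heuristic sketch, not a malware scanner.
import zipfile

def docx_risk_flags(path: str) -> list[str]:
    """Flag DOCX features often abused in phishing: VBA macros and external targets."""
    flags = []
    with zipfile.ZipFile(path) as archive:  # a DOCX is a ZIP container
        names = archive.namelist()
        if any(name.endswith("vbaProject.bin") for name in names):
            flags.append("embedded VBA macro project")
        for name in names:
            # Relationship parts declare links that point outside the document
            if name.endswith(".rels") and b'TargetMode="External"' in archive.read(name):
                flags.append(f"external link target in {name}")
    return flags

def pdf_risk_flags(path: str) -> list[str]:
    """Scan raw PDF bytes for markers of active content or embedded files."""
    with open(path, "rb") as f:
        data = f.read()
    markers = [b"/JavaScript", b"/Launch", b"/OpenAction", b"/EmbeddedFile"]
    return [m.decode() for m in markers if m in data]

if __name__ == "__main__":
    # Hypothetical downloads; quarantine or escalate anything that returns flags.
    print(docx_risk_flags("downloaded_report.docx"))
    print(pdf_risk_flags("downloaded_summary.pdf"))
```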

Finally, Menlo emphasizes educating users about the risks of public gen AI tools. Once employees turn to tools outside the information technology department’s control, it becomes easy for sensitive company data to end up in the hands of cybercriminals. It’s impossible to ban gen AI use completely in the workplace due to its popularity. However, if employees understand the risks and use only company-approved tools, organizations can create a work environment where gen AI is helpful instead of harmful.

Although the use of unsanctioned tools is not new, it has taken hold with AI faster than with any other technology I have seen. IT leaders need to get out in front of this and ensure the proper controls and safeguards are in place before employees unknowingly put company data at risk.

Zeus Kerravala is a principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this article for SiliconANGLE.

Image: SiliconANGLE/Microsoft Designer
