UPDATED 08:00 EDT / FEBRUARY 14 2024


New report finds sensitive information at risk in 55% of generative AI inputs

A new report released today by cloud security startup Menlo Security Inc. finds that 55% of all generative artificial intelligence inputs contain sensitive and personally identifiable information.

The finding was one of several in Menlo's report, "The Continued Impact of Generative AI on Security Posture," which analyzed changing patterns of employee generative AI usage and the security risks those behaviors pose to organizations.

In the last 30 days, more than half of the data loss prevention, or DLP, events detected by Menlo Security included attempts to input personally identifiable information. The next most common data type to trigger DLP detections was confidential documents, representing 40% of input attempts.
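To make the detection idea concrete, here is a minimal sketch of how a DLP-style check might flag personally identifiable information in a prompt before it is submitted. The patterns and function names are illustrative assumptions, not Menlo Security's implementation; production DLP engines use far richer detectors such as validated checksums, document fingerprinting and machine learning classifiers.

```python
import re

# Hypothetical, simplified PII patterns for illustration only.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of PII categories detected in a prompt."""
    return [name for name, pattern in PII_PATTERNS.items()
            if pattern.search(text)]

# A prompt containing an email address and a Social Security number
# would trigger two detections:
flagged = scan_prompt("Contact jane.doe@example.com, SSN 123-45-6789")
# flagged == ["email", "ssn"]
```

A real deployment would run checks like this at the network or browser layer, where both copy-and-paste input and file uploads can be inspected before they leave the organization.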

The report details how, from July to December last year, the market and nature of generative AI usage have transformed considerably. New platforms and features have become increasingly popular, but at the same time have introduced new cybersecurity risks within enterprises.

In one example from the report, attempted file uploads to generative AI websites rose 80%. Menlo Security's researchers attribute the increase in part to the many AI platforms that have added file upload features within the past six months, a capability in publicly available generative AI models that users have been quick to take advantage of.

Although users are now more likely to upload a file, copy-and-paste attempts to generative AI sites decreased only minimally over the same period and remain frequent. The two methods present the largest data loss risk given the ease and speed with which data, including source code, customer lists, roadmap plans and personally identifiable information, can be uploaded or inputted.

On the bright side, enterprises were found to be recognizing the risk and increasingly focusing on securing against data loss and data leakage resulting from generative AI usage. In the last six months, the Menlo Labs Threat Research team discovered a 26% increase in organizational security policies for generative AI sites. However, the majority are doing so on an application-by-application basis rather than by establishing policies across generative AI applications as a whole.

The report argues that if policies are applied on an application-by-application basis, organizations must constantly update their application list or risk gaps in safeguards for the generative AI sites employees use. That approach requires a scalable and efficient way to monitor employee behavior, adapt to the evolving functionality of generative AI platforms and address the resulting cybersecurity risks.
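The gap between the two policy models can be sketched in a few lines. The domains and classification set below are hypothetical examples, not any vendor's actual policy format; the point is that a per-application denylist misses sites it has not yet enumerated, while a category-level rule covers a new site as soon as it is classified.

```python
# Per-application model: each site must be added explicitly.
APP_DENYLIST = {"chat.openai.com", "gemini.google.com"}

# Category model: one rule covers every site classified as generative AI
# (classification would normally come from a web-categorization service).
GENAI_CATEGORY = {
    "chat.openai.com", "gemini.google.com",
    "claude.ai", "new-ai-tool.example",
}

def per_app_blocked(domain: str) -> bool:
    # Misses any generative AI site not yet on the list.
    return domain in APP_DENYLIST

def category_blocked(domain: str) -> bool:
    # A newly launched site is covered once it is classified.
    return domain in GENAI_CATEGORY

# A new site slips past the per-app policy but not the category policy:
# per_app_blocked("new-ai-tool.example") is False,
# category_blocked("new-ai-tool.example") is True.
```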

Other findings in the report include that 92% of organizations implementing security policies on an individual application basis have enacted measures focused on generative AI, whereas 8% allow unrestricted use. Among those applying group-level security policies to generative AI applications, 79% enforce security-focused policies, with 21% allowing free use.

The report also found that file uploads to generative AI sites are 70% higher when generative AI is treated as a whole category rather than limited to the six main sites. That gap underscores the limitations of maintaining effective security through application-specific policies and the broader problem of data vulnerability in the evolving generative AI landscape.

“While we’ve seen a commendable reduction in copy and paste attempts in the last six months, the dramatic rise of file uploads poses a new and significant risk,” said Menlo Security Chief Marketing Officer Pejman Roshan. “Organizations must adopt comprehensive, group-level security policies to effectively eliminate the risk of data exposure on these sites.”

