UPDATED 10:00 EDT / JANUARY 24 2024

AI

Prompt Security raises $5M to ensure LLMs don’t give up corporate secrets

Enterprise-focused generative artificial intelligence security startup Prompt Security Inc. said today it’s launching with $5 million in seed funding.

The round was led by Hetz Ventures and saw participation from Four Rivers, plus prominent angel investors such as the chief information security officers of Airbnb Inc., Elastic N.V. and Dolby Laboratories Inc.

The startup has created a unique security tool that’s being used by a number of high-profile companies to protect their applications, employees and customers from the threats associated with generative AI.

The company’s software enables each prompt and the consequent response from an AI model to be inspected, in order to prevent the exposure of sensitive data, block harmful content from being generated, and protect against other kinds of attacks. It says these are much-needed capabilities, given the worrying prevalence of so-called “AI hallucinations,” which is when generative AI models fabricate responses to prompts when they’re unsure of the true answer.

Recent research by Google LLC, for example, shows that LLMs such as ChatGPT can be induced to reveal large amounts of data that they were trained upon. Meanwhile, The New York Times has filed a lawsuit against ChatGPT’s creator OpenAI and Microsoft Corp., saying that their popular model can output almost verbatim article excerpts in its responses to users’ prompts.

Prompt Security co-founder and Chief Executive Itamar Golan says his company is focused on countering two of the biggest threats associated with generative AI prompts, including shadow AI and data leakage, and jailbreaks and prompt injection attacks. He explained that the first relates to the danger of generative AI tools being used without the knowledge of corporate security teams, which creates opportunities for data exfiltration and the exposure of critical company assets and intellectual property.

“Once sensitive data from the organization is being streamed to these GenAI tools, there’s a significant probability that this data will be used for future training of the LLMs and potentially be generated by these tools on external endpoints,” Golan said.

The latter threat involves companies that build consumer-facing applications with generative AI capabilities. “A malicious actor could craft a prompt, not necessarily too sophisticated, and it might expose data or cause the model to respond in inappropriate ways, leading to reputational damage,” Golan added.

The CEO pointed out that employees often have strong incentives to share sensitive enterprise data with generative AI tools, since they can make their lives significantly easier. However, when they do this, it can easily result in data being leaked.

“It opens them up to a host of security challenges, including models being manipulated by bad actors, and content being generated that is unsafe or infringes on copyright,” Golan continued. “Yet despite all the risks, gen AI unlocks immense value, and adopting it isn’t a matter of choice — it’s key to business survival.”

Prompt Security’s tools are aimed at mitigating these kinds of threats, and the company says they can be deployed by almost any organization in a matter of minutes. They come with extensions for all major browsers, and can also secure applications through a developer-focused software development kit.

The startup said its tools can inspect semantic data, looking at each prompt and the model’s response to it, in order to protect against various kinds of threats. They also provide visibility into the use of generative AI tools throughout an organization, enabling team leaders to better define access policies for each application.

They also use their own LLMs to detect and redact sensitive data such as personally identifiable information and corporate IP. Finally, they scrutinize each generative AI response, ensuring they do not contain harmful or toxic content, before being delivered to end users

Golan said his company empowers CISOs to become the enablers of generative AI within any organization, giving them a way to adopt the technology without introducing any security or privacy risks.

Holger Mueller of Constellation Research Inc. told SiliconANGLE that as generative AI impacts on the way we work and live, companies desperately need a way to safeguard their models and prevent people from getting them to misbehave, willingly or otherwise. “Logically, it makes sense to start with the user inputs, which in the case of LLMs is whatever prompt you’re using, and it’s no surprise that the innovation is coming from the startup field,” Mueller said. “Prompt Security has a most fitting name, and today’s funding round should encourage more enterprises to see what it can do.”

Hetz Ventures General Partner Pavel Livshiz said he and his partners has spent months looking for the right team in the generative AI security industry and ultimately settled on Prompt Security. “After getting to know [them], I can say without a doubt that they uniquely understand both the incredible potential of generative AI as well as the new attack surface that comes with it,” Livshiz said.

Photo: Prompt Security

A message from John Furrier, co-founder of SiliconANGLE:

Your vote of support is important to us and it helps us keep the content FREE.

One click below supports our mission to provide free, deep, and relevant content.  

Join our community on YouTube

Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.

“TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well” – Andy Jassy

THANK YOU