Lasso Security brings contextual data protection to generative AI applications
Lasso Security Inc., a generative artificial intelligence security company, today announced the launch of a custom contextual policy wizard aimed at helping companies avoid data leaks when working with everyday tools such as OpenAI’s chatbot ChatGPT.
The company provides end-to-end large language model cybersecurity and data management for businesses. Its platform detects which AI apps and tools employees are using and lets administrators craft policies governing their use to prevent data and knowledge leakage.
Data management has become increasingly complex as AI tools in the workplace have become the norm. Even as more third-party large language model services have put security and privacy rules in place for data passing through them, internal compliance teams still need certainty that employees aren't accidentally sending prompts they shouldn't to outside tools.
In the past, this protection came from rules-based policies that used patterns to detect problematic prompts given to LLMs, but an employee could inadvertently skirt a pattern simply by rephrasing a prompt. With Lasso's new custom policy wizard, which integrates with the company's browser extension and secure gateway, administrators can set up policy guidelines in plain English.
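To see the gap in concrete terms, consider a toy rules-based check. The pattern below is purely illustrative and is not one of Lasso's actual rules, but it shows how a small rewording slips past a regex:

```python
import re

# A classic rules-based DLP pattern: flag prompts that mention "salary"
# near a dollar figure. (Illustrative only; not Lasso's actual rules.)
SALARY_PATTERN = re.compile(r"salar(y|ies).{0,40}\$\d[\d,]*", re.IGNORECASE)

prompts = [
    "Summarize this: Jane's salary is $142,000 effective March.",  # caught
    "Summarize this: Jane's total comp package comes to 142k.",    # missed
]

for p in prompts:
    verdict = "BLOCKED" if SALARY_PATTERN.search(p) else "allowed"
    print(f"{verdict}: {p}")
```

The second prompt leaks the same information but matches no pattern, which is exactly the "near" or "similar" failure Dror describes below.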
“It’s all about the context,” Ophir Dror, co-founder and chief product officer at Lasso Security, told SiliconANGLE. “In order to solve the emerging problem of knowledge leak (as opposed to structured data leak) we completely shifted how we look at data protection. No more patterns or pre-defined regexes that fail to catch ‘near’ or ‘similar.’”
For example, if a policy forbids HR employees from discussing salaries, the AI engine understands the intent and blocks interactions that touch on wages, compensation and benefits in the context of the organization, while still allowing employees to discuss salaries at the company in terms of general and public knowledge.
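One plausible way to picture the contextual check is as a judgment handed to a model rather than a pattern match. The sketch below is an assumption about the general shape, not Lasso's implementation: POLICY, build_classifier_prompt() and classify() are hypothetical names, and the model call is stubbed so the example runs end to end.

```python
# A plain-English rule like one an admin might write in the wizard.
POLICY = (
    "HR employees may not share this organization's internal salary, wage, "
    "compensation or benefits details. General or public knowledge about "
    "pay is allowed."
)

def build_classifier_prompt(policy: str, user_prompt: str) -> str:
    """Frame the policy and the intercepted prompt as one judgment
    question for a model (illustrative wording only)."""
    return (
        f"Policy: {policy}\n"
        f"Prompt: {user_prompt}\n"
        "In the context of this organization, does the prompt violate the "
        "policy? Answer VIOLATION or ALLOWED."
    )

def classify(judge_prompt: str) -> str:
    """Stand-in for the model call; returns a canned answer so the
    sketch runs without an actual LLM."""
    return "VIOLATION"

prompt = "Draft an email listing the raises we approved for the sales team."
print(classify(build_classifier_prompt(POLICY, prompt)))  # VIOLATION
```

Because the judgment is semantic rather than lexical, "raises we approved" trips the same policy that "salary" would, with no regex enumerating synonyms.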
Policies are live and administrators can change them at any time. Admins receive alerts and telemetry about how policies are performing, and every rule goes through a validation process upon creation. There is also a way to test policies against different use cases, plus a tuning process that lets employees stay productive while rules are being adjusted.
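That testing step might look something like a unit-test loop over sample prompts. Again a hedged sketch: evaluate() is a placeholder for the engine's contextual check, and the test cases are invented.

```python
# Invented test cases pairing a sample prompt with the expected verdict.
TEST_CASES = [
    ("What's the median software-engineer salary nationwide?", "allowed"),
    ("List the bonus targets we set for the sales team.", "blocked"),
]

def evaluate(prompt: str) -> str:
    # Placeholder: a real engine would judge the prompt in organizational
    # context; this crude stand-in keys off org-specific phrasing.
    return "blocked" if "we set" in prompt else "allowed"

for prompt, expected in TEST_CASES:
    got = evaluate(prompt)
    status = "PASS" if got == expected else "FAIL"
    print(f"{status}: expected {expected}, got {got}: {prompt!r}")
```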
From a user perspective, there are a few options administrators can set based on organizational policy. In most cases, if a problem is found, the session is simply blocked and the user must craft a new prompt to proceed. For administrators, an alert is generated and an entry appears in the management console, allowing the admin to investigate.
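Put together, the enforcement path amounts to intercepting the prompt, judging it, and either forwarding it or blocking the session while logging an alert. A minimal sketch under those assumptions, with illustrative names rather than Lasso's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Alert:
    """An illustrative alert record, not Lasso's actual schema."""
    user: str
    policy: str
    prompt: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

console_log: list[Alert] = []  # entries surfaced in the management console

def handle_prompt(user: str, prompt: str, violation: str | None) -> str:
    """Block and alert on a violation; otherwise pass the prompt through."""
    if violation is None:
        return "forwarded to the LLM"
    console_log.append(Alert(user=user, policy=violation, prompt=prompt))
    return "session blocked; user must craft a new prompt"

print(handle_prompt("hr_user_01", "Share the new comp bands", "HR-salary"))
print(f"{len(console_log)} alert(s) pending admin review")
```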
“In the era of generative AI, traditional data protection mechanisms are not enough anymore,” said Dror. “Structured data is still a concern, but a new concern now emerges — knowledge leakage. When an employee is sending specs of your next features, designers send briefs of future models and finance personnel send budgets, the existing security stack fails.”
Image: Lasso Security