

Advances in artificial intelligence have been revolutionary, enabling machines to comprehend language and unstructured data.
Consequently, ChatGPT is transforming the cloud sector, and enterprises now want to harness their own data with AI securely while gaining a unified view of data across the divisions of an organization.
Securiti Inc., a company that specializes in privacy management software, is capitalizing on these advancements with its Data Command Center, a platform that unifies security, privacy and governance. The goal is to simplify compliance and increase the value a business generates from its data.
“Enterprise data sits in a very pristinely done, different apps, under different entitlements, under different security provisions,” according to Rehan Jalil (pictured), chief executive officer of Securiti. “If you give this data to the models without having all the controls in place, that’s not going to fly. To enable generative AI inside the enterprise, you basically need constructs in the guardrails to know that it’s being used safely. That’s what the company is all about.”
Jalil spoke with theCUBE industry analyst John Furrier at the “Cybersecurity” AWS Startup Showcase event, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed generative AI and how data is at the heart of making it safe and usable in the business environment. (* Disclosure below.)
The company’s customers can connect their data in the environment of their choice to use the product, with the Data Command Center serving as a hub for analyzing data and managing policies. The product is consumed simply, priced on the amount of data used to generate value, according to Jalil.
“There should be one place, across public cloud, across SaaS, across private data centers, across data cloud, and having all key obligations around data be understood through one contextual insight around data, what we call data command fabric,” Jalil said. “With this data command fabric, you have full insights.”
Generative AI faces obstacles such as inaccuracies, hallucinations, data manipulation and bias. Overcoming them requires control, evaluation and comprehension of the models, according to Jalil, along with safeguards and regulations around data to ensure AI is used effectively in business settings.
“For generative AI, you need absolutely the same guardrails, because if you think about generative AI, the only two key things in there, a model, which is the amazing innovation, but for enterprise — if you want to use the model — is your data,” he said. “Without this data, there is no generative AI inside the enterprise. But to use this data, you have to make sure this is used in a much more safe manner.”
For developers, access to a range of models is crucial, but those models must be diligently evaluated and managed to avoid issues such as AI poisoning or lobotomization, Jalil warned. That means understanding the risks each model carries and deciding which data can be used with it while mitigating those risks.
“Understanding the risks around the model, and contained around it, is the very first thing. The second category is all about what data can go into these models,” Jalil said. “Because if you send sensitive data, classified data, data that should not be seen by the people who are prompting on it, and if you send data that you don’t have consent for or data that you have governance and local regulations apply on it, you’re in thick soup.”
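The second category Jalil describes, controlling what data reaches a model, is often enforced by screening prompts before they leave the enterprise. The sketch below is a hypothetical illustration of that idea, not Securiti's implementation: the pattern list, placeholder format and function name are all assumptions for the example.

```python
import re

# Hypothetical sensitive-data patterns; a real deployment would use a far
# richer classification engine, not two regular expressions.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]*\w"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact_sensitive(text: str) -> str:
    """Replace matches of known sensitive-data patterns with placeholders
    before the text is ever sent to a generative model."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

prompt = "Contact jane.doe@example.com, SSN 123-45-6789, about the renewal."
print(redact_sensitive(prompt))
```

The same guardrail could equally block the prompt outright or check the caller's entitlements; redaction is just one of the policy outcomes a data command layer might apply.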
Neural networks that contain condensed knowledge present a danger: they can provide advice rooted in incorrect data, leading to misguided decisions and the potential exposure of sensitive information, according to Jalil. Even so, the benefits of integrating the technology into organizations are substantial, and companies are eager to adopt it securely and efficiently.
“If you feed some data into these models, you can expect it will be taken out somehow, right? Even if you put a bunch of guardrails around your prompts,” Jalil said. “So, you better be very careful on what data you are feeding into these models itself. So, all these considerations, I would say, it’s not really an impairment.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE’s coverage of the “Cybersecurity” AWS Startup Showcase event:
(* Disclosure: Thoropass Inc. sponsored this segment of theCUBE. Neither Thoropass nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)