UPDATED 11:42 EST / AUGUST 22 2023


Gen AI’s role in transforming industries, and the crucial need for data integrity

Generative artificial intelligence has become one of the hottest topics on the planet, and a board-level conversation across every industry.

Since data is at the epicenter of gen AI, it deserves close scrutiny: The quality and governance of that data will determine whether a gen AI initiative succeeds or fails, according to Rehan Jalil (pictured), chief executive officer of Securiti Inc.

“Without the data, there is no value that you can essentially create just like a human mind,” he said. “Human mind has all the neurons, but if you don’t learn anything, you don’t have any content through which you can learn and extend your imagination — it’s not as useful. If you want to use the data or use the gen AI for the enterprise use case, you have to utilize the data in the safest manner possible.”

Jalil spoke with theCUBE industry analyst Lisa Martin, during a CUBE Conversation ahead of the “Cybersecurity” AWS Startup Showcase event on September 14, an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed the revolutionary aspect of gen AI and the key role that data plays. (* Disclosure below.)

Controls are needed in gen AI to mitigate risks

With improper data being ingested by gen AI models now a burning issue, the right data controls are needed. Four key areas should be given the utmost emphasis for risk mitigation: model safety, data usage, prompt safety and regulation, according to Jalil.

“I would bucketize the risk into four areas,” he noted. “The one is with the model itself. The model is something you’re going to trust; you’re going to ask for advice. The second important thing is what data goes into these models. The third category is called prompts. The prompt is where people ask questions and you’re prompting the systems to give you answers. The fourth category is actually regulation across all these three things.”

With the open-source narrative gaining steam as an avenue for enhanced innovation, gen AI should take a cue from the rise of the data developer and the growth of open-source communities, according to Jalil. Nevertheless, safety must remain a first-class citizen.

“When you enable the developers with all the right tools, and when you actually have an ecosystem in which things can come together into some environments in an open-source form, it simply enables innovation,” he said. “On the other side, large enterprises will certainly also want to make sure that through the open source, the things that are coming, they’re safe, these models are safe, they’re not been tweaked.”

While traditional AI was used to find patterns in data, gen AI is emerging as a game changer that digs deeper into natural language. This is a significant milestone, because out-of-the-box solutions can now be built on a computer's ability to comprehend language, a capability previously restricted to humans, according to Jalil.

“I think people realize that gen AI or the LLMs can understand the constructs of a natural language, the hidden mathematics that exists, not just in English, but any language, including computer languages,” he noted. “Now, that is super powerful because machines were not able to do that before. Only humans could do that.”

How explainability fits into the picture

Gen AI models have emerged as new and useful beasts that offer answers based on prior knowledge and data. Explainability, however, is needed to ensure that the information generated is not malicious or compromised through attacks such as what Jalil calls large language model lobotomy.

“There are very explicit attacks on these models through training data or something called AI poisoning,” he stated. “You can poison these models of data or something that’s called LLM lobotomy. That’s why you want to understand when the answers come out, can you trust them. Humans give you an answer that you often don’t know how much bias is in there and explainability is just not there.”

A data command center is emerging as a fundamental ingredient in gen AI, because it provides a central place for fully comprehending the data and its context and for applying prompt-safety guardrails, according to Jalil.

“What we’ve learned from some of our largest customers in the finance, airlines, insurance and other large Fortune 500 companies is that they want to have … what we call a data command center,” he said. “Who should have access to data, who should not have access to data, what are the security controls, what are the privacy controls … you needed to understand what data could or should go to these models.”

Here’s the complete CUBE Conversation, part of SiliconANGLE’s and theCUBE’s pre-event coverage of the “Cybersecurity” AWS Startup Showcase event:

(* Disclosure: Securiti Inc. sponsored this segment of theCUBE. Neither Securiti nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)

Photo: SiliconANGLE
