AI has potential to automate threat detection, transform cybersecurity
The fanfare around artificial intelligence may be dying down, but according to security experts, its impact on defense strategy, red teaming and other aspects of cybersecurity could be long-lasting.
Large language models have the potential to cut the time spent on threat analysis by as much as 80 percent, estimates Vicente Diaz (pictured), threat intelligence strategist at VirusTotal, a crowdsourced threat intelligence platform acquired by Google LLC.
“All the hype around AI, but the thing is that it actually works for different stuff,” Diaz said. “We are making advances. It’s not like we’ve solved security yet, of course, but we are making everyone’s life easier. In this case, what we are analyzing is how LLMs can help us to analyze malware binaries, reverse engineering, which basically means spending a lot of time, needing a lot of expertise to make sense of what the malware is doing. And, well, if an LLM can do it for us, that’s a nice step forward.”
Diaz spoke with theCUBE Research’s John Furrier and Savannah Peterson at mWISE 2024, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed AI’s contributions to security infrastructure and code generation. (* Disclosure below.)
AI transforms red teaming, threat testing
Cybersecurity is currently experiencing the first wave of use cases for LLMs. These include identifying malware behavior and analyzing key parts of its code.
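To make that first wave of use cases concrete, the sketch below shows one way an analyst might hand decompiled pseudocode to a model and ask for a plain-language behavior summary. This is an illustrative sketch, not VirusTotal's actual tooling; the ask_llm helper stands in for whatever model endpoint a team uses.

```python
# Hypothetical sketch of LLM-assisted malware triage: feed decompiled
# pseudocode to a model and ask for a plain-language behavior summary.
# Not VirusTotal's implementation; the model call is a placeholder stub.

TRIAGE_PROMPT = """You are a malware analyst. Summarize what this decompiled
function does, list any suspicious behaviors such as persistence, network
beacons or credential access, and rate your confidence as low/medium/high.

--- DECOMPILED CODE ---
{code}
"""


def ask_llm(prompt: str) -> str:
    """Placeholder for a real model call (a hosted or local LLM endpoint)."""
    return "[model response would appear here]"


def triage_function(decompiled_code: str) -> str:
    """Build the analyst prompt and return the model's summary."""
    return ask_llm(TRIAGE_PROMPT.format(code=decompiled_code))


if __name__ == "__main__":
    sample = "call CreateRemoteThread ; write payload into explorer.exe"
    print(triage_function(sample))
```

In practice the returned summary would be reviewed by an analyst rather than acted on automatically, which is consistent with Diaz's point that the human factor remains central.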
“Everything you need to do for this pen testing, for this red teaming, you can use LLMs to create code for you, to analyze the stuff and to give you the best way to go, to give you an answer to something that is not trivial,” said Diaz. “And little by little, we are getting to the point that they are able to orchestrate everything for us and find answers to complex questions … We can expect that we can, to some extent, maybe fully automate this in the future. Maybe we can have a constant red teaming exercise going on and evolving.”
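The “constant red teaming exercise” Diaz describes would amount to an orchestration loop. The minimal, hypothetical sketch below shows the shape of such a loop: a model proposes the next test case, a harness runs it against an authorized, isolated target, and the outcome feeds back into the next proposal. The propose_test and run_in_sandbox helpers are illustrative stubs, not a real red-teaming framework.

```python
# Hypothetical sketch of a continuously evolving red-team loop: the model
# proposes a test, a harness runs it in a controlled environment, and the
# outcome is recorded so the next proposal can adapt. All functions here
# are illustrative stubs, not a real framework or exploit code.
from dataclasses import dataclass, field


@dataclass
class RedTeamState:
    history: list[str] = field(default_factory=list)  # past tests and outcomes


def propose_test(state: RedTeamState) -> str:
    """Placeholder for an LLM call that suggests the next test case,
    conditioned on what has already been tried."""
    return f"test-case-{len(state.history) + 1}"


def run_in_sandbox(test_case: str) -> str:
    """Placeholder for executing the test against an authorized, isolated
    target and returning an observation (blocked, detected, succeeded)."""
    return "blocked"


def red_team_loop(rounds: int) -> RedTeamState:
    """Run a fixed number of propose-execute-record iterations."""
    state = RedTeamState()
    for _ in range(rounds):
        test = propose_test(state)
        outcome = run_in_sandbox(test)
        state.history.append(f"{test} -> {outcome}")
    return state


if __name__ == "__main__":
    print(red_team_loop(rounds=3).history)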
Social engineering attacks have been some of the most common threats, and according to Diaz, security teams are still working to understand how much of that content is being generated with AI for malicious purposes. He emphasized that the human factor is still crucial for cybersecurity and that the human role in the process will likely transform as AI grows more advanced.
“Artificial intelligence was just a boring thing more related to math,” he said. “And now you see the real implications that look like magic … We had constraints in the past. One of the biggest ones was the size of the prompt that we could use. Now that this is changing and as LLMs are evolving very fast, we don’t have these constraints … With all this together, we are starting to get better and better results.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of mWISE 2024:
(* Disclosure: Google Cloud Security sponsored this segment of theCUBE. Neither Google Cloud Security nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)
Photo: SiliconANGLE