Report from Black Hat: Many questions, few answers as cybersecurity world confronts AI threats
Experienced whitewater rafters know that when they reach a bend in the river and hear crashing water ahead but can’t see what’s coming, it’s a good time to pull to the nearest bank and scout the course. Amid the current explosion of generative artificial intelligence use cases, the cybersecurity industry is having its whitewater moment.
That moment was on full display during the Black Hat 2023 gathering of cybersecurity researchers in Las Vegas this week. Multiple presenters spoke about the rapidly changing AI landscape in terms characterized more by questions than answers.
Are threat actors actively using ChatGPT? Probably, but evidence for it so far is scarce. Can large language models be poisoned to generate malicious actions? Definitely, yet how models can be manipulated for maximum damage remains under study.
Despite the current state of uncertainty, security researchers and executives are clear on one point: Generative AI is going to reshape the cybersecurity world in a transformation few could have ever imagined.
Three years ago, Dave DeWalt, previously chief executive of McAfee and FireEye and founder of NightDragon, characterized cyber “supercycles” as key periods that drove growth within the cybersecurity industry. In an exclusive interview with SiliconANGLE, DeWalt spoke of the significance behind the current AI wave.
“The threat just scales all over the place,” DeWalt said. “Generative AI creates this entire new model. This supercycle looks like it might be one of the biggest cycles I’ve ever seen.”
Hands-on analysis
Security researchers have been scrambling to gather information for a clearer picture of the emerging threat vectors triggered by growing use of generative AI. A number of companies at Black Hat this week set up labs where attendees could experiment with various security techniques involving AI. Nvidia Corp., whose DGX systems are widely used by companies to process data and train AI models, collaborated on a number of workshops where researchers could vet security practices and experiment with AI safety tools.
The hands-on research came with a sense of urgency, as malicious actors’ ability to develop and deploy sophisticated new threats has shifted into a higher gear. At one conference presentation, researchers offered a glimpse of humanlike digital twins: online personas, sometimes equipped with entirely AI-generated fake photos, that are capable of realistic conversational interaction.
Researchers from Beyond Layer Seven LLC described an incident in which a malicious actor leveraged AI to pose as a therapist in an online service and persuaded one user to commit suicide.
“Humans can attack these models and these models can very quickly turn into advanced threat actors,” Dr. Ben Sawyer, an applied neuroscientist at the University of Central Florida, said during the presentation on Thursday. “Digital twins are clearly well equipped to attack humans. This is a really new ecosystem that is growing very fast.”
The ability of generative AI to mimic human interaction is also turbocharging phishing attacks. In one Black Hat presentation on Wednesday, a researcher from Harvard University presented evidence that the telltale spelling errors and bad grammar that once gave away fraudulent emails have been taken completely off the table thanks to large language models.
“It’s now super easy to create phishing emails that are very efficient,” said Fredrik Heiding, research fellow at the Harvard John A. Paulson School of Engineering. “You don’t need to know much.”
Leveraging data for social engineering
The damage caused by sophisticated phishing exploits could soon pale in comparison with one of the more far-reaching uses of social engineering. Security researchers are increasingly concerned about the use of generative AI to spread disinformation or programmatically steer public opinion, often through widely used social media channels.
“It’s a serious, serious threat,” Dan Woods, global head of intelligence at F5 Inc., said in an interview with SiliconANGLE. “We can all be influenced to believe things that are not true. I believe that social media companies have a huge problem with fake accounts. There’s not enough attention on it now.”
One concern in the security community is that generative AI has opened the floodgates for the collection of massive amounts of information. ChatGPT and other powerful AI engines depend on ingesting vast quantities of data, collected through web scraping or uploaded in massive datasets, to train their models.
Zoom Video Communications Inc. recently clarified its AI data collection practices after changing the company’s terms of service to allow user information to be used for model training without first obtaining permission.
“What I do on Zoom they can now use to train their system,” Jeff Moss, president of DEF CON Communications Inc. and founder of Black Hat, said during his opening keynote remarks on Wednesday. “I’m not cool with that, but there’s no button I can push that says, ‘Don’t train me, bro.’ This is going to be the next battle on the internet.”
Government joins the chat
A combination of mass data collection and a growing threat landscape from the use of AI is driving a rise in activity on the regulatory front. In Europe, lawmakers have proposed an “AI Act” under which detailed summaries of copyrighted data used to train models would have to be made publicly available.
One area that has received particular scrutiny from regulators in recent months has been open source. The European Union is currently considering approval of the Cyber Resilience Act, an effort to improve cybersecurity in the EU by creating common standards for products with digital elements.
However, the current proposal would hold open-source developers liable for security issues in commercial products, a provision that Brian Fox, co-founder and chief technology officer at Sonatype Inc., has called a “death knell” for open source in the EU.
“I don’t know how it’s going to shake out,” Fox told SiliconANGLE. “The challenge that we have right now is raising awareness. People need to be concerned and raise their voice.”
Open-source security is beginning to receive more attention in the U.S. as well. On Thursday, the Office of the National Cyber Director, or ONCD, published a request for information to better understand open-source security and develop policies to improve it.
In an appearance at Black Hat on Wednesday, Kemba Walden, the acting national cyber director who leads the ONCD, expressed surprise that security concerns within the open-source developer community had not been addressed sooner.
“I was stunned to find out that the developer community isn’t necessarily or always trained on security-by-design,” Walden said. “How do we make [open source] more secure is a fundamental question.”
The government’s interest in open-source developers has been fueled by vulnerabilities found in widely used open-source tools such as Log4j, a popular Java logging framework. Yet the urgency may also be driven by the reality that developers have become prime targets in the current threat environment.
“We understand the developer experience and developer workflows probably better than anyone else in this space,” Alexis Wales, vice president of security operations at GitHub, told SiliconANGLE. “Developers are being targeted. We’re seeing that across a whole bunch of threat actors right now.”
Targeting critical infrastructure
Between attacks on developers, concerns around open-source vulnerabilities, and the potential for a wave of new threats created by widespread use of AI, the security community has plenty to be worried about. Perhaps lost in the noise is the continued prospect of a significant breach affecting critical infrastructure.
The Colonial Pipeline ransomware attack and an attempt to tamper with one region’s water supply are two noteworthy examples. In an interview for this story, the chief executive of one cybersecurity firm dedicated to protecting critical infrastructure expressed concern that it may well be only a matter of time before the world receives an abrupt wakeup call.
“I’m concerned that our critical infrastructure in the western world is not protected well enough,” said Claroty Inc. CEO Yaniv Vardi. “I do think a catastrophic event will happen, definitely sooner than later. The focus will be critical infrastructure.”
Throughout the presentations at Black Hat and among those interviewed for this story, no one was prepared to express optimism that the world’s infatuation with generative AI will result in a “killer app” for protecting systems and networks. Nor was anyone willing to predict that threat actors will triumph and rule the world.
“Both forces are going to drive each other,” Simran Khalsa, staff security researcher at Fastly Inc., said in an interview. “We are going to have to continually evolve with it.”