UPDATED 06:00 EDT / JUNE 28 2023

SECURITY

New report highlights security vulnerabilities in open-source AI projects

Generative artificial intelligence, delivered through generative pre-trained transformers and large language models, has surged in popularity, but the security implications it brings are often missing from the broader discussion around the sector.

A new report from software supply chain security platform Rezilion Inc., released today, looks into that very problem. Based on an investigation of the 50 most popular generative AI projects on GitHub, using the Open Source Security Foundation (OpenSSF) Scorecard as an evaluation tool, the report identifies risks in generative AI including trust boundary risks, data management risks, inherent model risks and general security issues.
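For readers who want to get a feel for the methodology, the OpenSSF Scorecard is an open-source command-line tool that grades a repository across automated checks and produces an aggregate score from 0 to 10. The sketch below is illustrative rather than Rezilion's actual pipeline: it assumes the scorecard binary is installed, that a GitHub token is exported as GITHUB_AUTH_TOKEN, and that the tool's JSON output carries the aggregate score in a top-level "score" field.

    import json
    import subprocess

    def scorecard_score(repo: str) -> float:
        """Run the OpenSSF Scorecard CLI against one GitHub repository
        and return its aggregate score on the 0-10 scale.

        Assumes `scorecard` is on PATH and GITHUB_AUTH_TOKEN is set.
        """
        result = subprocess.run(
            ["scorecard", f"--repo={repo}", "--format=json"],
            capture_output=True, text=True, check=True,
        )
        report = json.loads(result.stdout)
        # Per-check results (Branch-Protection, Vulnerabilities and so on)
        # live under report["checks"]; the aggregate is top-level.
        return report["score"]

    if __name__ == "__main__":
        # Auto-GPT, the most popular GPT-based project cited in the report.
        print(scorecard_score("github.com/Significant-Gravitas/Auto-GPT"))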

The researchers found that many AI models are granted excessive access and authorization, often without adequate security measures. Combined with an overall lack of maturity and basic security best practices in the open-source projects adopting these models, that overprovisioning is said to create an environment ripe for potential breaches.

The report reveals that while often popular and cutting-edge, the projects studied are relatively immature and exhibit poor security postures. Across the 50 projects studied, the average score was 4.6 out of 10 on the OpenSSF Scorecard. The most popular GPT-based project on GitHub, Auto-GPT, was assessed to have a Scorecard score of 3.7.
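The 4.6 figure is a straightforward mean over per-project scores. As a hypothetical sketch reusing the scorecard_score helper above, with a stand-in list in place of the report's 50 projects:

    # Hypothetical stand-ins for the 50 projects studied; the report
    # does not reproduce the full list here.
    repos = [
        "github.com/Significant-Gravitas/Auto-GPT",
        "github.com/hwchase17/langchain",
        # ...
    ]

    scores = [scorecard_score(r) for r in repos]
    print(f"Average OpenSSF Scorecard score: {sum(scores) / len(scores):.1f} / 10")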

“Generative AI is increasingly everywhere, but it’s immature and extremely prone to risk,” said Yotam Perkal, director of vulnerability research at Rezilion. “On top of their inherent security issues, individuals and organizations provide these AI models with excessive access and authorization without proper security guardrails.”

Perkal added that through the research, Rezilion aims to show that open-source projects built on insecure generative AI and LLMs tend to have a poor security posture, creating an environment of significant risk for organizations.

The report concludes with Rezilion recommending a set of best practices for the secure deployment and operation of generative AI systems. The recommendations include educating teams on the risks associated with adopting new technologies, closely monitoring security risks related to LLMs and open-source ecosystems, implementing robust security practices, conducting thorough risk assessments and fostering a culture of security awareness.

Image: Bing Image Creator
