UPDATED 06:00 EDT / JUNE 28 2023

SECURITY

New report highlights security vulnerabilities in open-source AI projects

Generative artificial intelligence, powered by generative pre-trained transformers and large language models, has surged in popularity, but the broader discussion around the sector often overlooks the security implications the technology brings to the table.

A new report from software supply chain security platform Rezilion Inc., released today, looks into that very problem. Based on an investigation of the 50 most popular generative AI projects on GitHub, using the Open Source Security Foundation Scorecard as an evaluation tool, the report identifies risks in generative AI, including trust boundary risks, data management risks, inherent model risks and general security issues.

The researchers found that many AI models are granted excessive access and authorization, often without adequate security measures. An overall lack of maturity and basic security best practices in the open-source projects adopting these models, combined with excessive access and authorization, is said to create an environment ripe for potential breaches.

The report reveals that while often popular and cutting-edge, the projects studied are relatively immature and exhibit poor security postures. Across the 50 projects, the average OSSF Scorecard score was 4.6 out of 10. The most popular GPT-based project on GitHub, Auto-GPT, scored just 3.7.
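The Scorecard data the researchers relied on can be checked independently: OpenSSF publishes Scorecard results through a public REST API. The short Python sketch below shows one minimal way to query that API for a single repository. The Auto-GPT repository path is illustrative and may have changed since the report was published, and the live score returned today will differ from the 3.7 assessed at the time.

    import json
    import urllib.request

    # Public OpenSSF Scorecard API; serves the latest published scan for a repo.
    API = "https://api.securityscorecards.dev/projects/github.com/{repo}"

    def fetch_scorecard(repo: str) -> dict:
        """Fetch the latest Scorecard result for a GitHub repository."""
        with urllib.request.urlopen(API.format(repo=repo)) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        # Repository path is illustrative; the project has been renamed before.
        result = fetch_scorecard("Significant-Gravitas/Auto-GPT")
        print(f"Aggregate score: {result['score']}/10")
        # Each individual check (e.g. Branch-Protection, Vulnerabilities) is
        # scored 0 to 10 and contributes to the weighted aggregate above.
        for check in result["checks"]:
            print(f"  {check['name']}: {check['score']}")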

“Generative AI is increasingly everywhere, but it’s immature and extremely prone to risk,” said Yotam Perkal, director of vulnerability research at Rezilion. “On top of their inherent security issues, individuals and organizations provide these AI models with excessive access and authorization without proper security guardrails.”

Perkal added that through the research, Rezilion aims to show that open-source projects utilizing insecure generative AI and LLMs themselves exhibit poor security posture, resulting in an environment of significant risk for organizations.

The report concludes with Rezilion recommending a set of best practices for the secure deployment and operation of generative AI systems. The recommendations include educating teams on the risks associated with adopting new technologies, closely monitoring security risks related to LLMs and open-source ecosystems, implementing robust security practices, conducting thorough risk assessments and fostering a culture of security awareness.

Image: Bing Image Creator
