UPDATED 09:00 EDT / AUGUST 15 2024

SECURITY

New report identifies critical vulnerabilities found in open-source tools used in AI

A new report released today by Protect AI Inc. details a range of newly discovered vulnerabilities in open-source tools used to build artificial intelligence systems, as the AI market and the tooling around it continue to expand at a rapid pace.

The vulnerabilities were found through Protect AI’s “huntr” AI and machine learning bug bounty program, which has more than 15,000 community members hunting for impactful vulnerabilities across the entire open-source software supply chain. According to the company, the findings highlight that the tools used in the supply chain to build the machine learning models that power AI applications are vulnerable to unique security threats.

Protect AI notes that the open-source tools highlighted in the monthly report are downloaded thousands of times a month to build enterprise AI systems. Many of them also shipped out of the box with vulnerabilities that can lead directly to complete system takeover, such as unauthenticated remote code execution or local file inclusion.

The full report contains 20 vulnerabilities, but notable among them were critical vulnerabilities found in tools such as Setuptools, Lunary and Netaddr.

A vulnerability in Setuptools, a Python package used in AI projects to manage and install the libraries and dependencies required for building, training and deploying models, allows attackers to execute arbitrary code using specially crafted package URLs. The flaw stems from the way Setuptools handles package URLs, opening the door to code injection: if attackers can control the URL input, they can inject and execute arbitrary commands on any system running an AI model that relies on Setuptools.
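The injection pattern described above can be illustrated with a simplified, hypothetical sketch. This is not Setuptools' actual code; the function names and the `curl` command are stand-ins chosen to show how an unsanitized URL spliced into a shell command becomes executable code, and how building an argument list instead avoids the problem.

```python
def build_fetch_command_unsafe(url: str) -> str:
    # HYPOTHETICAL vulnerable pattern: the URL is interpolated into a
    # shell command string, so shell metacharacters in an
    # attacker-controlled URL are parsed as additional commands.
    return f"curl -O {url}"

def build_fetch_command_safe(url: str) -> list[str]:
    # Safer pattern: validate the scheme and build an argv list, so the
    # URL is always passed as a single argument and no shell parses it.
    if not url.startswith(("http://", "https://")):
        raise ValueError(f"refusing non-HTTP(S) URL: {url!r}")
    return ["curl", "-O", url]

malicious = "https://example.com/pkg.tar.gz; rm -rf ~"
# The unsafe builder silently smuggles in a second shell command:
print(build_fetch_command_unsafe(malicious))
# The safe builder keeps the entire string as one inert argument:
print(build_fetch_command_safe(malicious))
```

The same principle applies to any tool that turns user-supplied URLs into subprocess invocations: keep the input out of shell parsing entirely rather than trying to escape it.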

Lunary, a developer platform designed to manage, improve and protect applications built with large language models, was found to have an authorization bypass vulnerability. The flaw allows users who have been removed from an organization to continue accessing, modifying and deleting organizational templates using outdated authorization tokens, leading to potential unauthorized data manipulation.
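The fix for this class of bug is to re-check membership on every request rather than trusting a previously issued token. The following is a minimal sketch under assumed names (the in-memory stores and `delete_template` helper are hypothetical, not Lunary's API), showing the patched behavior in which a stale token from a removed user is rejected.

```python
# Hypothetical in-memory stores standing in for a backend database.
ORG_MEMBERS = {"org-1": {"alice"}}          # current members per org
TEMPLATES = {"org-1": {"greeting": "Hi!"}}  # templates owned by each org

def delete_template(token: dict, org_id: str, name: str) -> bool:
    # Patched pattern: the token is only honored if its user is *still*
    # a member of the organization, checked at request time. A token
    # issued before removal therefore no longer grants access.
    if token["user"] not in ORG_MEMBERS.get(org_id, set()):
        return False  # stale token from a removed user is rejected
    TEMPLATES[org_id].pop(name, None)
    return True

stale_token = {"user": "bob"}  # bob was removed from org-1 after login
print(delete_template(stale_token, "org-1", "greeting"))  # rejected
```

The vulnerable version would have validated only the token's signature or session, skipping the live membership lookup that makes revocation effective.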

The third vulnerability, found in Netaddr, a Python library for network address manipulation commonly used in AI projects involving network data or infrastructure, is a server-side request forgery flaw. It can be used to bypass SSRF protections and potentially gain access to internal networks.
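To see what such a protection looks like, here is a minimal SSRF guard sketch using Python's standard-library `ipaddress` module (used here for illustration in place of Netaddr). Bypasses of this kind of check typically exploit address parsers that classify an unusual encoding of an internal address as external; the guard itself is the assumed, simplified part.

```python
import ipaddress

def is_safe_target(host: str) -> bool:
    # Reject literal IPs that fall in private, loopback or link-local
    # ranges; these point at internal infrastructure an SSRF attack
    # would try to reach.
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        return False  # not a literal IP; a real guard would resolve DNS first
    return not (addr.is_private or addr.is_loopback or addr.is_link_local)

print(is_safe_target("127.0.0.1"))  # loopback: blocked
print(is_safe_target("10.0.0.5"))   # private range: blocked
print(is_safe_target("93.184.216.34"))  # public address: allowed
```

A guard like this is only as strong as the parser behind it: if the library normalizes an attacker-crafted representation of an internal address into something it deems public, the check is bypassed, which is the failure mode the report describes.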

All the vulnerabilities disclosed in the report were relayed to maintainers a minimum of 45 days prior to publication. Protect AI also worked with the maintainers to provide a timely fix before sharing the details publicly. The three highlighted vulnerabilities were all patched with new releases prior to publication.

