JFrog report highlights critical security flaws in machine learning platforms
A new report out today from software supply chain company JFrog Ltd. reveals a surge in security vulnerabilities in machine learning platforms, highlighting the relative immaturity of the field compared with more established software categories.
The report digs into various vulnerabilities in open-source machine learning projects discovered by JFrog’s researchers, with a focus on server-side risks. One of the critical findings includes a directory traversal vulnerability in Weights & Biases’ Weave toolkit, designated CVE-2024-7340. The flaw allows low-privileged users to escalate their permissions, potentially gaining unauthorized access to sensitive files and escalating to an administrator role.
Another vulnerability detailed in the report, an improper access control vulnerability in ZenML Cloud, a platform for managing machine learning pipelines, was found to allow users with minimal permissions to elevate their status to full admin, granting them control over critical machine learning assets.
The report also touches on vulnerabilities in database frameworks optimized for artificial intelligence, such as Deep Lake, a database optimized for storing and managing large-scale AI and machine learning datasets, including vectors used in large language model applications. An identified command injection flaw designated CVE-2024-6507 was found to allow attackers to execute arbitrary system commands by exploiting weak input validation. The vulnerability could lead to remote code execution, exposing critical datasets and potentially compromising the integrity of models.
JFrog’s researchers also found a prompt injection vulnerability in Vanna AI, an open-source Python package that employs retrieval-augmented generation to help users generate accurate SQL queries from natural language inputs, which could lead to remote code execution. Researchers note the flaw demonstrates how attackers can manipulate natural language inputs to bypass predefined constraints, posing significant risks when these inputs are linked to actionable systems or processes.
All of the vulnerabilities were found to lead to remote code execution on a Mage.AI server. The unauthorized shell access vulnerability will provide root RCE directly since the attacker is given remote shell access with root permission.
“These vulnerabilities allow attackers to hijack important servers in the organization such as ML model registries, ML databases and ML pipeline,” the researchers warn. “Exploitation of some of these vulnerabilities can have a big impact on the organization — especially given the inherent post-exploitation vectors present in ML such as backdooring models to be consumed by multiple clients.”
Image: SiliconANGLE/Ideogram
A message from John Furrier, co-founder of SiliconANGLE:
Your vote of support is important to us and it helps us keep the content FREE.
One click below supports our mission to provide free, deep, and relevant content.
Join our community on YouTube
Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.
THANK YOU