UPDATED 08:00 EST / JUNE 06 2023

SECURITY

New report warns of cybersecurity risks in generative AI technology

Artificial intelligence is the “in” thing in 2023, fueling an unexpected surge in stock prices on Nasdaq Inc.’s exchange. However, hackers target all new, shiny things, and that’s the main takeaway from a new report from Vulcan Cyber Ltd.

The report raises awareness of the cybersecurity risks associated with the rapid proliferation of generative AI technology, particularly tools such as OpenAI LP’s ChatGPT. It notes that while AI offers much promise, it’s not without security pitfalls.

Vulcan Cyber’s report leads with the discovery that hackers can easily use ChatGPT to help them spread malicious packages into developers’ environments. According to Vulcan’s researchers, the risk is real due to the widespread adoption of AI across practically all business use cases, the nature of software supply chains and the prevalence of open-source code libraries.

The concept at the core of the report’s warnings is what the researchers call “AI package hallucination”: AI systems such as ChatGPT sometimes generate seemingly plausible but ultimately nonexistent coding libraries. When ChatGPT suggests one of these “hallucinated” packages, a malicious actor can create and publish a harmful package under the same name, exposing an otherwise secure environment to unexpected cyber threats.
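To make the failure mode concrete, here is a minimal sketch (ours, not from the report) that asks PyPI’s public JSON API whether a package name suggested by an assistant is registered at all; the package name below is hypothetical. An existence check alone proves little, since the lookup succeeds once an attacker publishes under the hallucinated name, which is exactly why the deeper vetting described later matters:

```python
# Minimal illustrative sketch: check whether an AI-suggested package
# name is registered on PyPI before running `pip install`.
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if PyPI serves metadata for this package name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 404 means the name is unclaimed -- or hallucinated.
        return False


suggested = "fastjsonvalidator"  # hypothetical name an assistant might invent
if not package_exists_on_pypi(suggested):
    print(f"'{suggested}' is not on PyPI: a likely hallucination, and a name "
          "an attacker could register first.")
```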

The researchers warn that with the increasing reliance on AI tools for professional tasks, this vulnerability could expose users to cybersecurity threats. Developers who now turn to AI tools such as ChatGPT for coding solutions, rather than traditional platforms such as Stack Overflow, might unknowingly install these malicious packages and expose the broader enterprise.

Although the Vulcan Cyber researchers say the issue demands more attention, the potential vulnerability doesn’t mean AI should be stopped. Instead, the report calls for increased vigilance and proactivity, particularly from developers who increasingly use AI in their everyday work.

The report argues that developers should be discerning in validating the libraries they use, especially when those libraries are recommended by AI. Developers should verify the legitimacy of a package before installation, considering factors such as its creation date, number of downloads, comments and any attached notes.
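As a hedged sketch of what such vetting might look like in practice, the snippet below pulls basic provenance signals from PyPI’s public JSON API, using the earliest release upload time as a stand-in for the creation date. Download counts aren’t served by this API (they come from a separate service such as pypistats.org), so only the date and metadata checks are shown:

```python
# Hedged sketch: gather provenance signals for a package before installing.
import json
import urllib.request


def package_signals(name: str) -> dict:
    """Fetch basic provenance signals from PyPI's JSON API."""
    url = f"https://pypi.org/pypi/{name}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)

    # The earliest upload time across all releases approximates the
    # package's creation date; a brand-new package deserves extra scrutiny.
    upload_times = [
        f["upload_time"]
        for files in data["releases"].values()
        for f in files
    ]
    info = data["info"]
    return {
        "created": min(upload_times) if upload_times else None,
        "latest_version": info["version"],
        "summary": info["summary"],
        "home_page": info["home_page"],
    }


print(package_signals("requests"))  # sanity check against a well-known package
```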

“It can be difficult to tell if a package is malicious if the threat actor effectively obfuscates their work or uses additional techniques such as making a trojan package that is actually functional,” the researchers note. “Given how these actors pull off supply chain attacks by deploying malicious libraries to known repositories, it’s important for developers to vet the libraries they use to make sure they are legitimate.”
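Since a functional trojan can pass casual inspection, name and metadata checks are usefully paired with artifact-level verification. One complementary safeguard, not specific to the Vulcan Cyber report, is confirming that a downloaded file matches the digest the package index publishes, the same idea behind pip’s --require-hashes mode. A minimal sketch:

```python
# Hedged sketch: verify a downloaded distribution against the sha256
# digests PyPI advertises for that release.
import hashlib
import json
import urllib.request


def pypi_sha256_digests(name: str, version: str) -> set:
    """Collect the sha256 digests PyPI lists for a release's files."""
    url = f"https://pypi.org/pypi/{name}/{version}/json"
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return {f["digests"]["sha256"] for f in data["urls"]}


def local_sha256(path: str) -> str:
    """Hash a downloaded wheel or sdist in streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Usage (file name and version are illustrative):
# ok = local_sha256("requests-2.31.0-py3-none-any.whl") in \
#      pypi_sha256_digests("requests", "2.31.0")
```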

Given how popular AI is at the moment, the report rightly illuminates a significant cybersecurity threat that is emerging rapidly alongside the widespread use of generative AI technologies.

The researchers call for greater vigilance and proactivity, particularly from developers, encouraging thorough validation of libraries recommended by AI platforms. The report underscores the need to balance embracing AI’s immense potential with conscientiously addressing the associated risks.

Image: Vulcan Cyber
