Immersive Labs warns generative AI bots are highly vulnerable to prompt injection attacks
A new report from cybersecurity training company Immersive Labs Inc. released today is warning of a dark side to generative artificial intelligence that allows people to trick chatbots into exposing company secrets.
The “Dark Side of GenAI” report delves into the generative AI-related risk of a prompt injection attack. It involves individuals inputting specific instructions into generative AI chatbots to trick them into revealing sensitive information and potentially exposing organizations to data leaks.
Based on analysis undertaken by Immersive Labs through its “prompt injection challenge,” the report finds that 88% of prompt injection challenge participants tricked the generative AI bot into giving away sensitive information in at least one level of the increasingly difficult challenge. Some 17% of participants tricked the bot across all levels, underscoring the risk presented by such large language models.
Takeways from the study include that users can leverage creative techniques to deceive generative AI bots, such as tricking them into embedding secrets in poems and stories or by altering their initial instructions to gain unauthorized access to sensitive information.
The report also found that users don’t have to be experts in AI to exploit generative AI. Non-cybersecurity professions and those unfamiliar with prompt injection attacks were found to be able to leverage creativity to trick bots, indicating that the barrier to exploiting generative AI in the wild using prompt injection attacks is lower than otherwise would be hoped for.
The report notes that as long as bots can be outsmarted by people, organizations are at risk. No protocols that exist today were found to prevent prompt injection attacks completely, creating an urgent need for AI developers to prepare and respond to the threat to mitigate potential harm to people, organizations and society.
“Based on our analysis of the ways people manipulate gen AI, and the relatively low barrier to entry to exploitation, we believe it’s imperative that organizations implement security controls within large language models and take a ‘defense in depth’ approach to gen AI,” said Kev Breen, senior director of Threat Intelligence at Immersive Labs and a co-author of the report. “This includes implementing security measures, such as data loss prevention checks, strict input validation and context-aware filtering to prevent and recognize attempts to manipulate gen AI output.”
Breen added that given the potential reputation harm is clear, “organizations should consider the tradeoff between security and user experience, and the type of conversational model used as part of their risk assessment of using gen AI in their products and services.”
Image: ChatGPT 4o
A message from John Furrier, co-founder of SiliconANGLE:
Your vote of support is important to us and it helps us keep the content FREE.
One click below supports our mission to provide free, deep, and relevant content.
Join our community on YouTube
Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.
THANK YOU