UPDATED 14:00 EST / SEPTEMBER 06 2023

SECURITY

It’s the summer of adversarial chatbots. Here’s how to defend against them

This has been the summer of adversarial chatbots.

Researchers from SlashNext Inc. and Netenrich discovered two such efforts, named WormGPT and FraudGPT. These cyberattack weapons are almost certainly just the first in a long line of products developed for nefarious purposes such as crafting highly targeted phishing emails and new hacking tools. Both are being sold across the dark web and via various Telegram forums.

This summer demonstrated that generative artificial intelligence is quickly moving into both offensive and defensive positions, with many security providers calling out how they are using AI methods to augment their defensive tools. The AI security arms race has begun.

But this arms race also carries important messages for businesses. Information technology managers have to get smarter about detecting AI-generated threats and use better network and security telemetry for quicker and more granular responses.

The subscription fee for FraudGPT ranges from $90 to $200 per month and $800 to $1,700 per year. The tool has accumulated thousands of reviews. Pricing on WormGPT is about $100 a month with discounts for annual subscriptions. Pricing varies depending on where the tool is purchased.

“It is unclear whether these price differences stem from the Dark Web’s monetization board policies, the author’s personal greed, someone trying to mimic the author’s work, or from an intentional effort to capitalize on the high demand,” analyst Arthur Erzberger wrote in a post on Trustwave’s blog. Nevertheless, the prices put these adversarial tools within reach of anyone, regardless of skill or resources.

How the adversarial chatbots are different

Both WormGPT and FraudGPT are freed from the ethical guardrails that legitimate chatbot tools have. Typically, those guardrails mean the bots can’t directly answer queries about illegal activity, hacking methods or the like. However, with careful prompt engineering, Erzberger was able to manipulate ChatGPT into offering up phishing lures and other responses similar to WormGPT’s.

Analyst Daniel Kelley posted on SlashNext’s blog that the company is observing offers across the dark web for other types of prompt engineering designed to manipulate ChatGPT and other legitimate chatbots into doing the same. Still, he said, “WormGPT produced an email that was not only remarkably persuasive but also strategically cunning, showcasing its potential for sophisticated phishing attacks.”

One possible innovation, and a warning sign, is that unlike other chatbots, FraudGPT is trained on dark web data that reflects a hacker’s perspective. What makes this more dangerous and significant is that it can detect context and understand user intent in query prompts.

Abnormal Security’s Emily Burns calls FraudGPT “a cyberattacker’s starter kit,” putting various tools and techniques together in one place.

“As time goes on, criminals will find further ways to enhance their criminal capabilities using the tools we invent,” security analyst Rakesh Krishnan wrote in Netenrich’s blog earlier this summer, calling FraudGPT the villain avatar of chatbots. “While organizations can create ChatGPT and other tools with ethical safeguards, it isn’t a difficult feat to reimplement the same technology without those safeguards.”

Planning for better AI-based defenses

There are several techniques that businesses can implement today to improve their defenses against adversarial chatbots. First and foremost is updating their phishing training programs to educate users about the nature of AI-based threats. Part of this awareness training is sensitizing users to be more discerning when downloading apps that purport to be legitimate chatbots, since hackers have already tried, and will continue to try, to pollute app stores with malware that mimics the names of legitimate chatbots.
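To make that kind of app vetting concrete, here is a minimal sketch, assuming a managed-device workflow where installable apps are reviewed before approval: it flags names that closely resemble well-known chatbot brands so they can be sent for manual review. The brand list and the similarity threshold are illustrative assumptions, not a vetted policy.

```python
# Minimal sketch: flag app names that closely mimic well-known chatbot brands
# so they can be sent for manual review before employees install them.
# The brand list and the 0.8 threshold are illustrative assumptions.
from difflib import SequenceMatcher

KNOWN_CHATBOT_BRANDS = ["chatgpt", "bard", "bing chat", "claude"]

def looks_like_chatbot_impersonation(app_name: str, threshold: float = 0.8) -> bool:
    """Return True if app_name resembles a known chatbot brand without matching it exactly."""
    name = app_name.strip().lower()
    for brand in KNOWN_CHATBOT_BRANDS:
        similarity = SequenceMatcher(None, name, brand).ratio()
        if name != brand and similarity >= threshold:
            return True
    return False

# A lookalike such as "ChattGPT" is flagged; the exact brand name and unrelated apps are not.
for candidate in ["ChattGPT", "ChatGPT", "Weather Widget"]:
    print(candidate, "->", looks_like_chatbot_impersonation(candidate))
```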

Second, they should implement better and more granular network and security telemetry to detect these threats and react quickly to mitigate them.
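As one illustration of what more granular telemetry could look like, here is a minimal sketch, assuming an email pipeline where each inbound message can be scored and the result forwarded as a structured event to whatever SIEM or alerting system is already in place. The signals and the threshold are illustrative assumptions; AI-written phishing will defeat naive keyword checks, which is exactly why richer telemetry and fast response matter.

```python
# Minimal sketch of phishing-oriented email telemetry: score a message on a few
# simple signals and emit a structured event that a SIEM or alerting pipeline
# could consume. The signals and the 2-signal threshold are illustrative
# assumptions; production detection would rely on far richer analysis.
import json
from email.message import EmailMessage

URGENCY_TERMS = ("wire transfer", "urgent", "payment overdue", "gift card")

def score_message(msg: EmailMessage) -> dict:
    signals = []
    sender = str(msg.get("From", "")).lower()
    reply_to = str(msg.get("Reply-To", "")).lower()
    body_part = msg.get_body(preferencelist=("plain",))
    body = body_part.get_content().lower() if body_part else ""

    # Signal 1: Reply-To points at a different domain than From.
    if reply_to and reply_to.split("@")[-1] != sender.split("@")[-1]:
        signals.append("reply_to_domain_mismatch")
    # Signal 2: high-pressure financial language in the body.
    if any(term in body for term in URGENCY_TERMS):
        signals.append("urgency_language")

    return {
        "event": "email_phishing_score",
        "from": sender,
        "signals": signals,
        "suspicious": len(signals) >= 2,  # illustrative threshold
    }

if __name__ == "__main__":
    msg = EmailMessage()
    msg["From"] = "ceo@example-corp.com"
    msg["Reply-To"] = "ceo@lookalike-domain.net"
    msg.set_content("This is urgent: please process the wire transfer today.")
    print(json.dumps(score_message(msg), indent=2))
```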

Next, corporate policies should be crafted to prevent users from uploading proprietary data to any chatbot, legitimate or not. That’s because this data can become available for future AI training sessions and could be used in an adversarial weapon. Many companies have already put such restrictions in place.
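A minimal sketch of how such a restriction might be enforced at an egress gateway follows; the chatbot domain list and the data-marking patterns are illustrative assumptions, and most organizations would implement this inside an existing DLP or secure web gateway product rather than custom code.

```python
# Minimal sketch of a DLP-style egress check: before text is forwarded to an
# external chatbot domain, scan it for markers of proprietary data and block
# the request if any are found. Domains and patterns are illustrative assumptions.
import re

CHATBOT_DOMAINS = {"chat.openai.com", "bard.google.com"}   # example destinations
PROPRIETARY_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),        # document markings
    re.compile(r"\binternal use only\b", re.IGNORECASE),
    re.compile(r"\bAKIA[0-9A-Z]{16}\b"),                   # AWS access key ID shape
]

def should_block_upload(destination_host: str, text: str) -> bool:
    """Block text bound for a chatbot domain if it appears to contain proprietary data."""
    if destination_host not in CHATBOT_DOMAINS:
        return False
    return any(pattern.search(text) for pattern in PROPRIETARY_PATTERNS)

# Example: a prompt that pastes in a marked document is stopped at the gateway.
prompt = "Summarize this CONFIDENTIAL product roadmap for me."
print(should_block_upload("chat.openai.com", prompt))      # True
```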

Finally, IT managers should stay abreast of new AI efforts to understand how they’re being used as force multipliers for hackers. The opening salvos with FraudGPT and WormGPT are just version 1.0 of what promises to be a long line of revisions that will become more clever, more lethal and more troublesome. “Over time, these technologies will only continue to improve,” said Erzberger.

“AI could ultimately widen cybercriminals’ reach and help them orchestrate attacks faster,” ReHack Features Editor Zac Amos posted on HackerNoon. “Conversely, many cybersecurity professionals use AI to increase threat awareness and speed remediation.”

Image: Bing Image Creator
