UPDATED 09:00 EDT / JULY 13 2023

SECURITY

Cybercriminals are using custom ‘WormGPT’ for business email compromise attacks

A new report published by cybersecurity startup SlashNext Inc. today warns that cybercriminals are using generative artificial intelligence, including a custom-built tool, to undertake nefarious activities.

The rise of AI over the last year has been well-documented, but a less-noted consequence is that increasingly sophisticated AI has also introduced a new vector for business email compromise, or BEC, attacks. Although OpenAI LP’s ChatGPT gains much of the attention in the AI market, hackers are also using its “black hat” alternative, WormGPT, to create persuasive, personalized emails, significantly increasing the success rate of such attacks.

WormGPT is an AI model built on the GPT-J language model, which was developed in 2021, and offers enhanced features, including unlimited character support, chat memory retention and code formatting capabilities. Unlike its ethical counterparts, WormGPT has been designed specifically for malicious activities and has been observed to produce cunning and persuasive BEC emails.

The SlashNext researchers explain in the report that the use of generative AI in BEC attacks offers considerable advantages to cybercriminals. The AI can produce emails with exceptional grammar, making them seem legitimate and reducing the likelihood of being flagged as suspicious. The technology also lowers the threshold for executing sophisticated BEC attacks, making it accessible to a broader spectrum of cybercriminals, irrespective of their skill levels.

As generative AI continues to evolve, the researchers warn that the measures employed to safeguard against its misuse must also evolve.

The first recommendation is that companies invest in BEC-specific training — not exactly a new idea, but the SlashNext researchers advise that the training should also cover the role AI can play in augmenting these threats. Second, organizations should enhance their email verification measures, including systems that alert when incoming emails impersonate internal executives or vendors, and keyword-detection software that flags messages containing specific terms linked to BEC attacks.
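To illustrate the second recommendation, the check can be sketched in a few lines of Python. This is a minimal, hypothetical example — the keyword list, executive directory and function names below are illustrative assumptions, not part of SlashNext’s report or any specific product:

```python
# Hypothetical terms a BEC keyword detector might flag; real deployments
# would tune these against their own threat intelligence.
BEC_KEYWORDS = {"wire transfer", "urgent payment", "gift cards",
                "updated banking details"}

# Hypothetical directory of internal executives, used to spot
# display-name impersonation from an unexpected address.
EXECUTIVES = {"jane doe": "jane.doe@example.com"}

def flag_email(sender_name: str, sender_addr: str, body: str) -> list[str]:
    """Return a list of reasons an email looks like a possible BEC attempt."""
    reasons = []
    lowered = body.lower()
    for keyword in BEC_KEYWORDS:
        if keyword in lowered:
            reasons.append(f"keyword match: {keyword}")
    expected_addr = EXECUTIVES.get(sender_name.strip().lower())
    if expected_addr and sender_addr.lower() != expected_addr:
        reasons.append(f"display name impersonates internal executive: {sender_name}")
    return reasons
```

A message claiming to be from an executive but sent from an outside address, and asking for a wire transfer, would trip both checks — e.g. `flag_email("Jane Doe", "jane.doe@freemail.example", "Please process this wire transfer today.")` returns two reasons.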

Mike Parkin, senior technical engineer at software security startup Vulcan Cyber Ltd., told SiliconANGLE that it’s no surprise that cybercriminal groups have gone this route.

“Conversational AI like ChatGPT and its kin are good at sounding like a real person,” Parkin said. “That makes it a lot easier for a criminal operator who might have English as their second or third language to write convincing hooks. Creating a phishing email is almost the exact opposite of creating malicious code in that a good social engineering hook will strive for clarity rather than obscurity.”

Image: Bing Image Creator
