UPDATED 08:00 EDT / MARCH 25 2025

SECURITY

Malicious AI tool mentions surge 200% across dark web channels in 2024

A new report out today from threat intelligence firm KELA Research and Strategy Ltd. reveals a 200% increase in mentions of malicious artificial intelligence tools on cybercrime forums through 2024, emphasizing how cybercriminals are rapidly embracing AI tools and tactics.

The finding comes from KELA’s 2025 AI Threat Report: How Cybercriminals are Weaponizing AI Technology, which used data from KELA’s intelligence-gathering platform that monitors and analyzes cybercrime underground communities, including dark web forums, Telegram channels and threat actor activity.

Along with finding a 200% increase in mentions of malicious AI tools, the report found a 52% rise in AI jailbreak discussions over the past year. Threat actors were found to be continuously refining AI jailbreaking techniques to bypass security restrictions in public AI systems.

Cybercriminals were found to be increasingly distributing and monetizing so-called “dark AI tools,” including jailbroken models and purpose-built malicious applications like WormGPT and FraudGPT. The tools are designed to automate core cybercriminal activities such as phishing, malware development and financial fraud.

By removing safety restrictions and adding custom capabilities, the AI systems lower the barrier for less skilled attackers to carry out complex attacks at scale.

On the phishing front, phishing campaigns were found to be becoming more sophisticated, with threat actors leveraging generative AI to craft convincing social engineering content, sometimes enhanced with deepfake audio and video to impersonate executives and deceive employees into authorizing fraudulent transactions.

AI was also found to be accelerating malware development, allowing for the rapid creation of highly evasive ransomware and infostealers, posing significant challenges for traditional detection and response methods.

“We are witnessing a seismic shift in the cyber threat landscape,” said Yael Kishon, AI product and research lead at KELA. “Cybercriminals are not just using AI – they are building entire sections in the underground ecosystem dedicated to AI-powered cybercrime. Organizations must adopt AI-driven defenses to combat this growing threat.”

To combat the rising AI-powered cyber threats, KELA recommends that organizations invest in employee training, monitor evolving AI threats and tactics and implement AI-driven security measures such as automated intelligence-based red teaming and adversary emulations for generative AI models.

Image: SiliconANGLE/Reve

A message from John Furrier, co-founder of SiliconANGLE:

Your vote of support is important to us and it helps us keep the content FREE.

One click below supports our mission to provide free, deep, and relevant content.  

Join our community on YouTube

Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.

“TheCUBE is an important partner to the industry. You guys really are a part of our events and we really appreciate you coming and I know people appreciate the content you create as well” – Andy Jassy

THANK YOU