

A new report out today from threat intelligence firm KELA Research and Strategy Ltd. reveals a 200% increase in mentions of malicious artificial intelligence tools on cybercrime forums through 2024, emphasizing how cybercriminals are rapidly embracing AI tools and tactics.
The finding comes from KELA’s 2025 AI Threat Report: How Cybercriminals are Weaponizing AI Technology, which used data from KELA’s intelligence-gathering platform that monitors and analyzes cybercrime underground communities, including dark web forums, Telegram channels and threat actor activity.
Alongside the 200% jump in mentions of malicious AI tools, the report documents a 52% rise in discussions of AI jailbreaking over the past year, with threat actors continuously refining jailbreak techniques to bypass security restrictions in public AI systems.
Cybercriminals were found to be increasingly distributing and monetizing so-called “dark AI tools,” including jailbroken models and purpose-built malicious applications like WormGPT and FraudGPT. The tools are designed to automate core cybercriminal activities such as phishing, malware development and financial fraud.
By removing safety restrictions and adding custom capabilities, the AI systems lower the barrier for less skilled attackers to carry out complex attacks at scale.
Phishing campaigns, meanwhile, are growing more sophisticated, with threat actors leveraging generative AI to craft convincing social engineering content, sometimes enhanced with deepfake audio and video that impersonates executives and deceives employees into authorizing fraudulent transactions.
AI was also found to be accelerating malware development, allowing for the rapid creation of highly evasive ransomware and infostealers, posing significant challenges for traditional detection and response methods.
“We are witnessing a seismic shift in the cyber threat landscape,” said Yael Kishon, AI product and research lead at KELA. “Cybercriminals are not just using AI – they are building entire sections in the underground ecosystem dedicated to AI-powered cybercrime. Organizations must adopt AI-driven defenses to combat this growing threat.”
To combat rising AI-powered cyber threats, KELA recommends that organizations invest in employee training, monitor evolving AI threats and tactics, and implement AI-driven security measures such as automated intelligence-based red teaming and adversary emulation for generative AI models.