UPDATED 09:00 EDT / MAY 22 2024

SECURITY

Generative AI services have driven a huge surge in phishing attacks

A new report released today by phishing protection company SlashNext Inc. details a massive increase in malicious emails, much of it driven by artificial intelligence services such as ChatGPT.

The SlashNext State of Phishing 2024 mid-year assessment documents some objectively massive increases in malicious email, and that's saying something in an age when rising cyberattack and malicious email volumes are nothing new. In the last six months, malicious emails tracked by SlashNext increased 341%, and over the last 12 months, an incredible 856%.

The numbers get larger still. SlashNext reports a more than 41-fold increase in malicious emails since the launch of ChatGPT on Nov. 30, 2022. "Cybercriminals had swiftly adapted, using large language model chatbots to launch a multitude of highly targeted phishing attacks at an alarming scale," the report notes.

Generative AI is also noted as the new favorite tool among cybercriminals for business email compromise attacks, with BEC attacks growing 27% over the last six months. Over the same period, credential phishing grew 217%, making it the No. 1 access point for breaches. The report calls credential phishing big business for hackers looking for access to deploy ransomware and steal data and intellectual property.

The report also highlights the rise of attacks based on CAPTCHAs, the common tests intended to be solvable only by humans. In particular, Cloudflare Inc.'s CAPTCHAs are being used to mask credential-harvesting forms: attackers are generating thousands of domains and putting Cloudflare CAPTCHAs in front of credential phishing forms to hide them from automated security scanners that are unable to bypass the tests.
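For defenders, one practical implication is that automated URL analysis needs to recognize when it is being served a CAPTCHA interstitial rather than real page content. The snippet below is a minimal, hypothetical Python heuristic along those lines; the specific marker strings and the "cf-mitigated" response header are assumptions about how Cloudflare challenge pages commonly present themselves, not details taken from the SlashNext report.

```python
# Minimal sketch: flag URLs whose response looks like a Cloudflare challenge
# page instead of real content, so they can be routed to deeper analysis
# (e.g., sandboxed browsing) rather than being marked clean.
# Assumption: the "cf-mitigated" header and the marker strings below are
# commonly seen on Cloudflare challenge pages, but are not guaranteed.

import requests

CHALLENGE_MARKERS = (
    "challenge-platform",   # script path often present on challenge pages
    "Just a moment...",     # typical interstitial page title
)

def looks_challenge_gated(url: str, timeout: float = 10.0) -> bool:
    """Return True if the URL appears to sit behind a CAPTCHA/JS challenge."""
    resp = requests.get(url, timeout=timeout, allow_redirects=True)
    if resp.headers.get("cf-mitigated", "").lower() == "challenge":
        return True
    body = resp.text[:20000]  # only the head of the page is needed
    return any(marker in body for marker in CHALLENGE_MARKERS)

if __name__ == "__main__":
    suspicious = "https://example.com/login"  # placeholder URL
    if looks_challenge_gated(suspicious):
        print("Challenge-gated page: content not visible to the scanner")
```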

In another sign of the times, QR code-based attacks were also found to be growing in popularity and now comprise 11% of all malicious emails, with the malicious links often embedded in legitimate infrastructure.
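Because the malicious link is hidden inside an image, plain URL filters never see it; a defender has to decode the QR code before the link can be checked. The sketch below is a small illustrative example, assuming the open-source pyzbar and Pillow libraries, of extracting URLs from a QR image attachment so they can be analyzed like any other link; it is not drawn from the SlashNext report.

```python
# Minimal sketch: decode QR codes found in an email image attachment and
# surface any embedded URLs for normal link analysis.
# Assumption: the third-party pyzbar and Pillow packages are installed.

from pyzbar.pyzbar import decode
from PIL import Image

def urls_from_qr_image(path: str) -> list[str]:
    """Return any http(s) URLs encoded in QR codes within the image."""
    results = decode(Image.open(path))
    payloads = [r.data.decode("utf-8", errors="replace") for r in results]
    return [p for p in payloads if p.lower().startswith(("http://", "https://"))]

if __name__ == "__main__":
    for url in urls_from_qr_image("attachment.png"):  # placeholder filename
        print("QR-embedded link to inspect:", url)
```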

Darren Guccione, co-founder and chief executive of cybersecurity company Keeper Security Inc., told SiliconANGLE that as AI services such as ChatGPT have gained ground in practical use, cybersecurity has become "an arms race and bad actors are constantly evolving their tools to circumvent detection, while defenders are trying to adapt."

“A bad actor can utilize ChatGPT a number of ways, including to create convincing phishing emails,” Guccione said. “By leveraging ChatGPT or the natural language processing capabilities of other generative AI tools, bad actors can quickly and easily craft sophisticated messages tailored to specific individuals or organizations, making it more likely for recipients to fall victim to them.”

Guccione also warned that “AI in the hands of adversaries has the potential to ramp up social engineering exponentially,” given that it’s currently one of the most successful scamming tactics available.

Image: SlashNext
