New report details how cybercriminals are leveraging AI in email attacks
A new report out today from Abnormal Security Corp., a cybersecurity startup focused on business email compromise, details how threat actors are increasingly harnessing generative artificial intelligence to produce more convincing phishing and malware attacks.
The report covers how cybercriminals exploit generative AI services such as OpenAI LP’s ChatGPT and Google LLC’s Bard to craft impeccably written, error-free emails that can easily be mistaken for legitimate communications.
Abnormal Security’s researchers analyzed a series of recent attacks and found that these AI-generated emails take various forms, ranging from credential phishing to vendor fraud and sophisticated business email compromise, or BEC, schemes. In previous years, grammar mistakes, awkward language and other inconsistencies made phishing emails easier to spot. Now, AI-generated content can be indistinguishable from normal business communication, amplifying the challenge for companies and individuals alike.
One particular trend highlighted in the report is the use of AI in impersonation attacks. In one example, a phishing email impersonating Facebook warned users that their page violated community standards and had been unpublished. The message was free of grammatical errors, and its tone and style closely matched official Facebook communications.
Another example in the report was a payroll diversion scam in which an attacker impersonated an employee and requested changes to direct deposit information. The email had a professional tone and contained no identifiable indicators of compromise, demonstrating the hazards of AI-powered phishing.
The report serves as a warning about the use of AI in phishing and other scams, but it also outlines strategies to counter AI-enabled threats. According to the researchers, the best way to detect an AI-generated email is with AI itself. In Abnormal’s case, its platform analyzes the text of suspicious emails to assess the likelihood that each word was predicted by an AI language model.
Although using AI to detect AI can occasionally flag emails written by humans, the researchers argue that it serves as a reliable indicator of possible AI involvement in an email’s composition. Combined with other signals, this analysis is crucial for detecting and preventing malicious intent in an age when attackers use AI themselves.
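That word-by-word likelihood check is, in essence, a perplexity test: text that a language model finds highly predictable is more likely to have been machine-generated. The sketch below illustrates the general technique only, assuming the open-source Hugging Face transformers library with GPT-2 as a stand-in scoring model; the cutoff value and sample text are illustrative and do not reflect Abnormal’s actual system.

```python
# Minimal sketch of perplexity-based AI-text scoring.
# Assumes the "transformers" and "torch" packages are installed;
# the model and threshold are illustrative, not Abnormal's system.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Average 'surprise' per token: low perplexity means the text is
    highly predictable, a hint it may be machine-written."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    # out.loss is the mean negative log-likelihood per token
    return torch.exp(out.loss).item()

email_body = "Your page has been scheduled for permanent suspension."
score = perplexity(email_body)
print(f"perplexity = {score:.1f}")
if score < 40:  # illustrative cutoff; real systems combine many signals
    print("Text is unusually predictable; possible AI generation.")
```

In practice a single score like this is noisy, which is why the report stresses combining it with other signals rather than relying on it alone.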
“Generative AI will make it nearly impossible for the average employee to tell the difference between a legitimate email and a malicious one, which makes it more vital than ever to stop attacks before they reach the inbox,” the report concludes. “Modern solutions … use AI to understand the signals of known good behavior, creating a baseline for each user and each organization and then blocking the emails that deviate from that — whether they are written by AI or by humans.”
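The baselining approach the report describes can be pictured with a toy example: learn each user’s normal communication patterns from known-good mail, then flag messages that fall outside them. The sketch below is a deliberately simplified, hypothetical illustration of that idea; the class and method names are invented for this example and are not Abnormal’s product.

```python
# Toy illustration of behavioral baselining for email anomaly detection.
# All names and logic here are hypothetical, not Abnormal's implementation.
from collections import defaultdict

class SenderBaseline:
    """Learns which sender domains each user normally receives mail from,
    then flags messages from domains outside that baseline."""

    def __init__(self) -> None:
        self._known: dict[str, set[str]] = defaultdict(set)

    @staticmethod
    def _domain(address: str) -> str:
        return address.rsplit("@", 1)[-1].lower()

    def observe(self, user: str, sender: str) -> None:
        # Called on known-good historical mail to build the baseline.
        self._known[user].add(self._domain(sender))

    def is_anomalous(self, user: str, sender: str) -> bool:
        # True when this sender's domain has never appeared for this user.
        return self._domain(sender) not in self._known[user]

baseline = SenderBaseline()
baseline.observe("alice@example.com", "payroll@example.com")
print(baseline.is_anomalous("alice@example.com", "hr@payro11-update.com"))  # True
```

A real behavioral model would track far richer signals, such as tone, timing, recipients and request types, but the principle of blocking deviations from a learned per-user baseline is the same.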