![](https://d15shllkswkct0.cloudfront.net/wp-content/blogs.dir/1/files/2025/01/AIhackers.jpeg)
A new report released today by Google LLC’s Threat Intelligence Group details how advanced persistent threat groups and coordinated information operations actors from countries such as China, Iran, Russia and North Korea are using generative artificial intelligence in their campaigns — but despite some headlines to the contrary, it’s not quite as bad as it could be.
The report, which focuses on interactions with Google’s AI assistant Gemini, found that allegedly government-backed threat actors have primarily used Gemini for common tasks such as reconnaissance, vulnerability research and content generation. The report did not find evidence that the hacking groups were using Gemini for more malicious purposes, such as developing new AI-driven attack techniques or bypassing its built-in safety mechanisms.
The Google analysts found that rather than using AI to revolutionize their attacks, APT and IO actors appear to be leveraging it to speed up routine tasks rather than to create novel threats. The report highlights that Gemini’s safeguards blocked direct misuse, preventing it from being used for phishing, malware development or infrastructure attacks.
Among the notable findings, Iranian APT and IO actors were the most frequent users of Gemini, using it for research and content creation, while Russian APT actors showed limited interaction with the AI model. Chinese and Russian IO actors, on the other hand, were found to be using Gemini primarily for localization and messaging strategy rather than direct cybersecurity threats.
“For skilled actors, generative AI tools provide a helpful framework, similar to the use of Metasploit or Cobalt Strike in cyber threat activity,” the report notes. “For less skilled actors, they also provide a learning and productivity tool, enabling them to more quickly develop tools and incorporate existing techniques.”
The report adds that current large language models on their own are not a game-changer for cybercriminals but acknowledges that this could change with the evolving nature of AI development. As new AI models and agent-based systems emerge, the researchers believe that threat actors will continue to experiment with generative AI, requiring continuous monitoring and updates to security frameworks.
To mitigate current and future risks, Google is actively refining Gemini’s security measures and sharing intelligence with the broader cybersecurity community. The report stresses the need for cross-industry collaboration to ensure AI remains a tool for security rather than exploitation.