UPDATED 16:39 EST / JANUARY 29 2025

SECURITY

Google report finds state-based hackers are using AI for research and content generation

A new report released today by Google LLC’s Threat Intelligence Group details how advanced persistent threat (APT) groups and coordinated information operations (IO) actors from countries such as China, Iran, Russia and North Korea are using generative artificial intelligence in their campaigns — but despite some headlines to the contrary, it’s not quite as bad as it could be.

The report, which focuses on interactions with Google’s AI assistant Gemini, found that allegedly government-backed threat actors have primarily used Gemini for common tasks such as reconnaissance, vulnerability research and content generation. The report found no evidence that the hacking groups were using Gemini for more malicious purposes, such as developing new AI-driven attack techniques or bypassing its built-in safety mechanisms.

The Google analysts found that rather than using AI to revolutionize their attacks, APT and IO actors appear to be leveraging it to speed up routine tasks rather than to create novel threats. The report highlights that Gemini’s safeguards blocked direct misuse, preventing it from being used for phishing, malware development or infrastructure attacks.

Among the notable findings, Iranian APT and IO actors were the most frequent users of Gemini, using it for research and content creation, while Russian APT actors showed limited interaction with the AI model. Chinese and Russian IO actors, on the other hand, were found to be using Gemini primarily for localization and messaging strategy rather than direct cybersecurity threats.

“For skilled actors, generative AI tools provide a helpful framework, similar to the use of Metasploit or Cobalt Strike in cyber threat activity,” the report notes. “For less skilled actors, they also provide a learning and productivity tool, enabling them to more quickly develop tools and incorporate existing techniques.”

The report adds that current large language models on their own are not a game-changer for cybercriminals but acknowledges that this could change with the evolving nature of AI development. As new AI models and agent-based systems emerge, the researchers believe that threat actors will continue to experiment with generative AI, requiring continuous monitoring and updates to security frameworks.

To mitigate current and future risks, Google is actively refining Gemini’s security measures and sharing intelligence with the broader cybersecurity community. The report stresses the need for cross-industry collaboration to ensure AI remains a tool for security rather than exploitation.

