Google Cloud debuts threat intelligence service, AI security tools at RSA
Google LLC’s cloud unit debuted a raft of new cybersecurity tools at the RSA Conference today, including a threat intelligence service that will provide customers with information about hacker activities.
Many of the additions are rolling out as updates to existing products. Some of the new tools are designed to fend off cyberattacks that target a company’s foundation models, while others use such models to help remediate breaches.
A new source of cybersecurity data
One way companies block hacking attempts is by tracking cybercriminals’ activities and identifying when they deploy a new tactic that might pose a risk. Using this information, administrators can harden the corporate network against the new tactic to reduce the chances of a breach. Companies source data about hacker activities from so-called threat intelligence services.
Google Threat Intelligence, the first new offering that Google Cloud debuted at RSAC today, is the search giant’s entry into this product category. It makes the data that the Alphabet Inc. unit collects about hacking campaigns available to customers for use in their breach prevention efforts.
One of the data sources on which Google Threat Intelligence draws is the company’s Mandiant unit. The unit, which provides breach detection and remediation services, investigates about 1,100 hacking incidents every year. Google Threat Intelligence provides access to the data that Mandiant collects from those investigations and through its hacker monitoring efforts.
The service also draws on several information sources. It uses data from VirusTotal, a Google service that allows cybersecurity professionals to upload suspicious files and check if they’re indeed malicious. Additionally, Google Threat Intelligence incorporates data that the search giant collects about the cyberattacks that target its users’ 1.5 billion Gmail accounts and four billion devices.
The service includes an embedded version of the search giant’s Gemini 1.5 Pro large language models. Customers can use the AI to automatically reverse engineer malware and reveal its source code. A cybersecurity team could, for example, analyze a ransomware strand to find the code snippet that unscrambles decrypted files.
“It was able to process the entire decompiled code of the malware file for WannaCry in a single pass, taking 34 seconds to deliver its analysis and identify the killswitch,” Google Cloud Security Vice President Sunil Potti and Sandra Joyce, vice president of Google Threat Intelligence, wrote in a blog post.
Another Gemini-powered feature promises to speed up so-called entity extraction. Administrators can use the capability to quickly aggregate information about a hacking group, its targets, breach tactics and related details.
Gemini comes to Google Security Operations
Google Security Operations is a cloud service that companies can use to scan telemetry from their Google Cloud environments for breach indicators. As part of a new release of the service detailed at RSA today, the search giant is adding a Gemini-powered analytics tool. Google says it significantly speeds up the task of finding technical information about a potential breach and determining how to respond.
“It can help reduce the time security analysts spend writing, running, and refining searches and triaging complex cases by approximately sevenfold,” Google Cloud Product Management Director Chris Corde wrote in a blog post. “Security teams can search for additional context, better understand threat actor campaigns and tactics, initiate response sequences and receive guided recommendations on next steps — all using natural language.”
The update also introduces a number of other AI capabilities. One of the new capabilities can monitor a company’s cloud environment for malicious activity and, when it identifies a new breach tactic, automatically create a so-called detection to address it. A detection is a software workflow designed to spot a specific type of hacking tactic.
A feature called Playbook Assistant will make it easier for cybersecurity teams to create playbooks, another type of cybersecurity automation workflow. Such workflows take steps to mitigate a breach without the need for manual input, which speeds up response times. A playbook can, for example, automatically isolate a virtual machine if an antivirus determines that it may contain malware.
Much of the breach data that companies use in cyberattack investigations is sourced from system logs. Often, different logs are organized in different ways, which requires administrators to turn the data into a common format before analyzing it. Google Security Operations is receiving a feature that will automate the task of extracting information from log files to save time for cybersecurity teams.
AI security in focus
A third set of new features is rolling out for Google Security Command Center Enterprise, a cybersecurity platform that the search giant introduced in March. It’s designed to help companies more efficiently tackle vulnerabilities and breach attempts.
The first upgrade to the platform that debuted at RSAC today is a tool called Notebook Security Scanner. It’s designed to detect vulnerabilities in notebooks, coding environments that developers often use to build AI models. Notebooks’ flagship feature is that they can turn a piece of code into a functioning program nearly instantly, which makes it possible to quickly test the results of code changes.
Notebook Security Scanner is designed to spot vulnerabilities introduced by open-source components. There are many open-source tools that promise to ease the task of building AI applications. As a result, there is a strong likelihood that a software team working on a new neural network will incorporate at least some publicly available code into its notebooks.
Notebook Security Scanner is joined by a second new tool, Model Armor, that will become available in preview next quarter. It’s designed to help companies filter malicious AI prompts and block harmful outputs. The tool can fend off, among other threats, prompt injection attacks, cyberattacks that use malicious input to trick an LLM into disclosing sensitive data or producing erroneous responses.
Image: Google
A message from John Furrier, co-founder of SiliconANGLE:
Your vote of support is important to us and it helps us keep the content FREE.
One click below supports our mission to provide free, deep, and relevant content.
Join our community on YouTube
Join the community that includes more than 15,000 #CubeAlumni experts, including Amazon.com CEO Andy Jassy, Dell Technologies founder and CEO Michael Dell, Intel CEO Pat Gelsinger, and many more luminaries and experts.
THANK YOU