SECURITY
At this time last year, the cybersecurity world appeared to be successfully balancing the use of artificial intelligence to prevent attacks with defending against adoption of the same tools by hostile nation-states and malicious actors. The story is different now.
Strong winds are buffeting the tightrope and there is legitimate concern that protection of global networks could fall into the digital abyss. At this week’s gathering of cybersecurity professionals for RSAC in San Francisco, a steady drumbeat of keynotes and side sessions offered evidence that threat actors have not only adopted AI, but they are having success in using autonomous technology to fuel identity-based attacks, large-scale denial-of-service attacks and poisoning of the software supply chain.
The speed at which AI is being used against an expanded enterprise attack surface, coupled with the lack of a clear plan to deal with it, has some leading figures in the cybersecurity world sounding alarms.
“It’s a breakneck pace that I’ve never seen in my career in technology,” George Kurtz, president and chief executive of CrowdStrike Inc., said in his keynote remarks at the conference on Tuesday. “The problem is we’re doing 200 miles per hour in the car and we’re arguing about what radio station to listen to.”
Rapid development of AI-powered voice and image technology has led to fresh business opportunities for bad actors in recent months. A report from Microsoft Corp. documented how a group operating out of North Korea has been using AI to create highly convincing emails and fake personas capable of speaking on the phone or appearing in a Zoom call with realistic human capabilities.

Mitiga’s Brian Contos spoke to RSAC attendees about rising identity theft and deepfakes.
This group is using AI to bypass traditional job screening controls and get hired in information technology roles at global firms. “Identity is still the No. 1 access vector,” Brian Contos, field chief information security officer at Mitiga Inc., said in a presentation during RSAC on Monday. “AI is amplifying identity-based attacks. Adversaries no longer break in, they log in.”
Cybersecurity provider Cloudflare Inc. has also documented AI’s use in identity fraud, noting that it’s fueling a rise in insider exploitation. AI is enabling threat actors to fabricate profiles, pass deepfake interviews with audio manipulation, gain trusted access and then exfiltrate intellectual property from unsuspecting firms, according to Blake Darché, head of Cloudforce One and Threat Intelligence at Cloudflare.
“All of these threat actors are clearing these background checks,” Darché said. “They can pretty much use their company account to do anything.”
The premium currently being placed on deepfake impersonation has also led to a rise in credential stealing. One recent exploit in this area has caught the attention of top security researchers at Google LLC’s Mandiant Consulting division.
Earlier this month, Aqua Security Software Ltd. disclosed that attackers exploited a misconfiguration in a GitHub environment affecting Trivy, an open-source vulnerability scanning tool commonly used in the DevSecOps community. The attackers inserted malicious “infostealer” code into the tool and force-updated existing version tags. Infostealer malware is designed to breach systems and steal sensitive data, such as login credentials, financial details and personal information.
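The force-updated tags are what made the attack propagate: a version tag is a mutable pointer, so pipelines that fetch “the same version” can silently receive different code. A common mitigation, illustrated here as a minimal Python sketch rather than anything specific to the Trivy incident, is to pin an immutable content digest and verify downloaded artifacts against it:

```python
import hashlib

# Hypothetical pinned digest for a known-good release artifact. Pinning an
# immutable digest (rather than a mutable tag like v0.61.0) means a
# force-updated tag pointing at trojanized code fails verification
# instead of being installed.
PINNED_SHA256 = hashlib.sha256(b"trusted release bytes").hexdigest()

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the downloaded bytes match the pinned digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# The original artifact still verifies; a tampered build does not.
assert verify_artifact(b"trusted release bytes", PINNED_SHA256)
assert not verify_artifact(b"trusted bytes + infostealer", PINNED_SHA256)
```

The same idea underlies digest-pinned container images and lockfile hashes: the reference names the exact bytes, not a label an attacker can repoint.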
Aqua Security has been publishing daily updates on the intrusion, indicating earlier today that it had progressed into the “remediation and documentation phase.” In a briefing for the media during RSAC on Monday, Charles Carmakal, chief technology officer at Mandiant Consulting-Google, indicated that his group was assessing the projected downstream impact of the supply chain breach on software as a service.
“We know over 1,000 impacted SaaS environments right now that are actively dealing with this particular threat campaign,” Carmakal said. “That thousand-plus downstream victims will probably expand into another 500, another 1,000, maybe another 10,000. We’ll see this playing out over the next several weeks and months.”
The shifting security landscape has also seen new holes develop through vulnerabilities in AI tools. One of the most popular AI agents this year has been OpenClaw, open-source software designed to run continuously and act on behalf of users.
A February report from Security Scorecard Inc. warned that vulnerabilities in OpenClaw deployments were leaving tens of thousands of internet-facing instances exposed to takeover. When Nvidia Corp. announced last week that it would launch its own enterprise version of OpenClaw, named NemoClaw, the firm also noted that it would implement a set of security protocols for the offering that added privacy and cybersecurity guardrails, along with limits to the agent’s network access.
In a discussion of OpenClaw during one RSAC briefing on Monday, Ken Huang, project lead on the OWASP AIVSS Project that scores AI vulnerabilities, described what he termed the “lethal trifecta” for today’s AI agents: private data, untrusted content and external communication.
“In order for you to deploy an OpenClaw strategy, you first need to have an OpenClaw security strategy,” Huang told attendees.
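Huang’s “lethal trifecta” lends itself to a simple policy gate: an agent step that combines all three properties can exfiltrate private data under the influence of injected instructions, so it should be blocked. The sketch below is an illustrative Python rendering of that rule, not the logic of OpenClaw or any named product, and the `AgentAction` fields are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AgentAction:
    """Hypothetical summary of what a proposed agent step would touch."""
    reads_private_data: bool
    handles_untrusted_content: bool
    communicates_externally: bool

def violates_lethal_trifecta(action: AgentAction) -> bool:
    # Any one or two of these capabilities may be acceptable; the
    # combination of all three is what enables prompt-injected exfiltration.
    return (action.reads_private_data
            and action.handles_untrusted_content
            and action.communicates_externally)

# Reading a private file and summarizing it locally passes the gate...
assert not violates_lethal_trifecta(AgentAction(True, False, False))
# ...but reading private data while processing untrusted web content
# and sending traffic externally trips the policy.
assert violates_lethal_trifecta(AgentAction(True, True, True))
```

In practice such a gate would sit between the agent’s planner and its tool executor, forcing a human approval step when the trifecta condition is met.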
AI’s capability for faster and more accurate compute operations has turbocharged the distributed denial-of-service, or DDoS, attack business. Cloudflare’s researchers have noted that 2025 was a record year for hyper-volumetric DDoS strikes. In a conference presentation this week, Cloudflare disclosed a 730% increase in DDoS attacks over the past 15 months.
Attackers are using AI to automate target reconnaissance, optimize timing and generate evasive traffic patterns that can bypass traditional defenses. The creation of large, self-managing botnets, such as Aisuru, has facilitated sizable attacks that can cripple infrastructure and crash legacy cloud-based DDoS protection solutions.
Last week, the U.S. Department of Justice announced that it had participated in a court-authorized law enforcement operation to disrupt Aisuru and three other global botnets. As noted by the DOJ, these botnets were capable of generating DDoS attacks at over 30 terabits per second, the largest ever seen.
“Do you have enough capacity to handle a 30-terabits-per-second attack?” Cloudflare Chief Security Officer Grant Bourzikas said in a presentation on Tuesday. “A 34-terabits-per-second attack is pretty much going to knock people out. Ask your vendor, ‘What is your capacity on a DDoS attack?’”
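To put the quoted figure in perspective, a rough back-of-envelope conversion (using decimal units, as network vendors typically do) shows what absorbing such a flood entails:

```python
def tbps_to_gb_per_s(tbps: float) -> float:
    # Terabits per second -> gigabytes per second:
    # multiply by 1000 (Gb per Tb), divide by 8 (bits per byte).
    return tbps * 1000 / 8

# A 30 Tbps flood delivers 3,750 gigabytes of traffic every second --
# far beyond what a single data center uplink can absorb.
assert tbps_to_gb_per_s(30) == 3750.0
```

That scale is why mitigation depends on globally distributed scrubbing capacity rather than any one facility’s bandwidth.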
Can the cybersecurity community combat AI-fueled vulnerabilities and attacks with AI? Recent announcements and interviews with security practitioners suggest this will be the game plan going forward.
On Monday, Google announced that its threat disruption unit would pursue a strategy of technical takedowns, legal action and product hardening to combat growing threats. “We must move toward a philosophy of active defense,” said Google Vice President of Threat Intelligence Sandra Joyce. “This is not hacking back. This is a legal and ethical use of intelligence to protect our own platforms.”
Even before Google’s news this week, companies in the cybersecurity world had been pursuing strategies to leverage AI in defense against bad actors. In January, Zscaler Inc. expanded its AI Security Suite with new features to provide enterprises with greater visibility and control over how AI is being used across environments.

Zscaler’s Deepen Desai and Dhawal Sharma spoke at RSAC about the latest threats and how to use AI tools for defense.
This has become more of an issue in the security world, since AI traffic often doesn’t look like normal user activity. Many security tools were not built to spot anomalies when automated systems hold conversations with each other. Zscaler’s tools are designed to show where AI is running, who can access it and what data it touches.
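One reason agentic traffic stands out is timing: humans browse in irregular bursts, while automated systems often fire requests on a tight, regular cadence. The sketch below is one illustrative heuristic for that signal, not Zscaler’s or any vendor’s actual detection logic, and the `max_jitter` threshold is an arbitrary assumption:

```python
from statistics import pstdev

def looks_automated(request_times: list[float], max_jitter: float = 0.05) -> bool:
    """Flag a session whose inter-request gaps are near-uniform.

    Computes the gaps between consecutive request timestamps (in seconds)
    and flags the session when their spread is below a small threshold.
    """
    gaps = [b - a for a, b in zip(request_times, request_times[1:])]
    if len(gaps) < 2:
        return False  # too little data to judge
    return pstdev(gaps) < max_jitter

# Machine-regular cadence: requests exactly 0.5s apart.
assert looks_automated([0.0, 0.5, 1.0, 1.5, 2.0])
# Human-irregular cadence: uneven gaps between requests.
assert not looks_automated([0.0, 1.3, 1.9, 4.2, 4.8])
```

Real products combine many such signals (identity, destination, payload, volume); timing regularity alone would generate false positives on legitimate cron jobs and health checks.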
“It’s even more important this is done by organizations given the agentic workflows,” Deepen Desai, chief security officer at Zscaler, said in an interview with SiliconANGLE. “You will have to lean in on your security controls, the platform you have picked.”
Key enterprise players such as Snowflake Inc. are also leaning into AI-grounded security tools that can reduce exposure and enhance governance. Last week, Snowflake announced that governance and security management startup Bedrock Data’s AI-driven protection would be integrated into the Snowflake AI Data Cloud platform.
Security professionals have no illusions about the challenges AI has brought to the hard work of protecting critical platforms and data. AI will play a role in defense, and whether that strategy succeeds will depend on progress in combating the many threats discussed at RSAC this week.
“It’s always a cat-and-mouse game,” Snowflake CISO Brad Jones told SiliconANGLE. “There’s a lot of good things that have come out of the last year that have shored up our defense. It’s AI for security and security for AI.”