UPDATED 09:00 EDT / DECEMBER 08 2025

SECURITY

Resemble AI hauls in $13M for its different approach to deepfake detection

Resemble AI, a Toronto and San Francisco-based startup focused on securing generative artificial intelligence systems, today announced a $13 million funding round and the launch of what it calls the industry’s strongest deepfake detection model.

The investment brings the company’s total funding to $25 million.

Resemble AI’s technology is urgently needed. A recent study by Surfshark found that deepfake-enabled fraud has accounted for more than $1.56 billion in business and consumer losses this year. That’s likely just a fraction of what’s to come, as the technology becomes far more widely adopted by cybercriminals. According to a recent forecast by Deloitte Touche Tohmatsu Ltd., deepfake-related fraud is likely to top $40 billion by the end of 2027.

The company’s technical strategy differs from that of most players in the escalating race to identify synthetic media. It’s one grounded in synthetic data generation, architecture-level awareness of emerging generative models and a design that the company claims can withstand the same evasive techniques adversaries use to beat today’s detectors.

At the center of the announcement is Detect-3B Omni, a 3 billion-parameter multimodal detection model that Resemble says delivers 98% accuracy across 38 languages and ranks first on Hugging Face Inc.’s audio and image deepfake detection leaderboards. The model stems from a technical philosophy that rejects the idea that deepfake detection is primarily about pattern-spotting in real-world data, said founder and CEO Zohaib Ahmed.

“The reason deepfakes are nearly impossible to detect is that you need a ton of synthetic data, and synthetic data isn’t readily available,” he said. “You have to compute and create it.”

Pivot from media generation

The company was founded in 2018 as a provider of voice and media generation tools. It pivoted to security, having already built its own generative models, produced large volumes of synthetic training sets and studied the subtle artifacts left behind by different model architectures.

Deepfakes have become so sophisticated that even some makers of detection tools concede the most convincing fakes are nearly impossible to catch. Most tools struggle because they rely heavily on real-world examples of AI-generated media, Ahmed said. New generative architectures emerge monthly, and adversaries can introduce small transformations, such as a compression effect, that push detectors beyond the limits of their training.

“Most deepfake detection models right now are extremely fragile,” he said. “If you apply a filter or compress the audio, it throws off the model completely.”

Resemble AI says it solves this problem by building what Ahmed described as an “inverse generative model.” Since generative systems predict the next token in a sequence, Resemble trains Detect-3B to identify the mathematical traces of those predictions, even when they’re invisible to human listeners or viewers. The approach works, he said, because detection does not depend on recognizing a specific identity or media sample.

The model requires no enrollment of a person’s voice or face, unlike biometric authentication systems. Instead, Detect-3B performs frame-by-frame and pixel-level analysis on raw signals. “It’s not looking at context. It’s not doing transcription,” Ahmed said. “It’s looking purely at the raw signal and the artifacts that come from generative architectures.”
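In rough pseudocode terms, the enrollment-free, frame-by-frame approach Ahmed describes might look like the sketch below. It slices a raw signal into overlapping frames, scores each frame, and aggregates the results; the scoring heuristic is an invented stand-in, not Resemble AI’s model.

```python
# Illustrative sketch of enrollment-free, frame-level raw-signal analysis.
# The per-frame scoring heuristic is hypothetical; a real detector would
# look for artifacts left by specific generative architectures.

def frames(signal, frame_len=400, hop=160):
    """Slice a raw sample sequence into overlapping frames."""
    return [signal[i:i + frame_len]
            for i in range(0, len(signal) - frame_len + 1, hop)]

def score_frame(frame):
    """Placeholder detector: returns a synthetic-likelihood in [0, 1].
    Toy heuristic: flag unnaturally low sample-to-sample variation."""
    diffs = [abs(a - b) for a, b in zip(frame, frame[1:])]
    mean_diff = sum(diffs) / len(diffs)
    return 1.0 if mean_diff < 0.01 else 0.0

def detect(signal):
    """Aggregate per-frame scores into a clip-level verdict:
    the fraction of frames flagged as synthetic."""
    scores = [score_frame(f) for f in frames(signal)]
    return sum(scores) / len(scores)
```

Note that nothing in the sketch identifies *who* is speaking; it scores only the signal itself, which is what makes an enrollment-free design possible.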

This allows the detector to operate in real time in audio and video calls without relying on prior samples of a participant’s voice or appearance. Ahmed said the simplicity of the integration is important because the highest-growth area for deepfakes is now corporate fraud.

“Executives and CEOs are being spoofed,” he said. “What used to be consumer fraud and political fraud is now multimodal attacks targeting enterprises.” A favorite tactic of attackers is to spoof audio or video messages from executives instructing others to transfer money to fraudulent accounts.

Replay resilience

One of Resemble’s differentiators is its focus on replay-attack resilience. Attackers often generate fake audio, record it on a separate device and replay it over a conferencing platform. Most detectors fail in this scenario because the re-recording process alters the signal enough to evade pattern-matching models, Ahmed said. Resemble published a paper earlier this year on how replay attacks undermine audio deepfake detection.
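The failure mode Ahmed describes can be illustrated with a toy simulation: a fragile detector that memorizes an exact synthetic artifact misses the same fake once “re-recording” (modeled here as a smoothing filter plus noise) perturbs the samples. All functions and parameters below are invented for illustration.

```python
# Toy illustration of why replay re-recording evades brittle detectors.
# "Re-recording" is modeled as a 3-tap moving average plus small noise.

import random

def fragile_detect(signal, artifact):
    """Fragile detector: flags only an exact match of a known artifact."""
    n = len(artifact)
    return any(signal[i:i + n] == artifact
               for i in range(len(signal) - n + 1))

def replay(signal, noise=0.05, seed=0):
    """Toy re-recording: smooth the signal, then add small random noise."""
    rng = random.Random(seed)
    last = len(signal) - 1
    smoothed = [(signal[max(i - 1, 0)] + signal[i] + signal[min(i + 1, last)]) / 3
                for i in range(len(signal))]
    return [s + rng.uniform(-noise, noise) for s in smoothed]
```

Running `fragile_detect` on the original fake succeeds, but the same call on `replay(fake)` fails, mirroring how re-recording pushes pattern-matching models outside their training distribution.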

The product’s multilingual detection is a key feature, the CEO said. It’s trained to recognize subtle signal-level artifacts that vary with the training data behind different generative models. “Language is a particular type of data,” he said. “Compression is another. Our model has to handle all of them.”

Ahmed said Resemble has built a scalable data pipeline that continuously produces new synthetic media based on recently published model architectures, including those released within the past few months. “We’re very deep in the ecosystem,” he said. “We understand architecture changes ahead of time.”

To update Detect-3B quickly, the company uses a modular fine-tuning approach inspired by adapter-based training in large language models. Instead of spending weeks retraining the full multimodal model, the company can add or adjust components to gain coverage on new generative techniques. “At this point we can get there in under an hour,” Ahmed said.
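The adapter idea can be sketched in miniature: a frozen base scorer stays untouched while a small residual module is trained against a new generator family. This is a loose analogy to adapter-based fine-tuning in large language models, not Resemble AI’s actual pipeline; all names and numbers are illustrative.

```python
# Hedged sketch of adapter-style updating: only the small residual
# module is trained; the frozen base detector is never retrained.

def base_score(x):
    """Frozen base detector with fixed pretend weights."""
    return 0.5 * x

class Adapter:
    """Tiny trainable residual correction for a new generator family."""
    def __init__(self):
        self.w = 0.0  # the only trainable parameter

    def __call__(self, x):
        return base_score(x) + self.w * x  # base output + residual

    def fit(self, xs, ys, lr=0.1, epochs=200):
        """Gradient descent on the residual weight alone."""
        for _ in range(epochs):
            grad = sum((self(x) - y) * x for x, y in zip(xs, ys)) / len(xs)
            self.w -= lr * grad
```

Because only the adapter’s parameters move, an update converges in far fewer steps than retraining the full model would, which is the intuition behind the “under an hour” turnaround Ahmed cites.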

At 3 billion parameters, Detect-3B is substantially larger than many existing detectors. That requires a lot of computing power, but Ahmed said Resemble AI’s relationship with investor Google LLC’s AI Futures Fund has helped ensure resource availability.

Google support

In addition to Detect-3B, the company has Resemble Intelligence, an explainability and reporting layer that combines outputs from multiple detection models with Google’s Gemini 3. Ahmed emphasized that Gemini is not the detector but an orchestration system. “What we send into Gemini is our own models,” he said. “It has the context it needs because we feed it detection, identity and watermarking signals.”

That architecture keeps Gemini from becoming a single point of failure. So does using multiple independent AI models, Ahmed said, because it reduces the chance that any one model can be evaded.
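The ensemble argument can be made concrete with a simple combination rule: if the system flags media when *any* independent detector fires, an attacker must evade every model at once. The detector names and threshold below are invented for the example.

```python
# Illustration of the multi-model evasion argument: under an
# "any detector fires" rule, evading two of three models is not enough.

def ensemble_verdict(scores, threshold=0.5):
    """Flag as synthetic if any independent detector's score crosses
    the threshold (max-combination rule)."""
    return max(scores) >= threshold

# An attacker tuned to evade the audio and image models is still
# caught by the watermark model.
scores = {"audio_model": 0.10, "image_model": 0.20, "watermark_model": 0.80}
verdict = ensemble_verdict(scores.values())
```

Each model added shrinks the set of inputs that slip past all detectors simultaneously, which is the sense in which independence reduces evasion risk.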

The company said it will use the fresh capital to expand its engineering team and strengthen partnerships with investors including Okta Ventures LLC, Sony Group Corp.’s Innovation Fund and Google, several of which are integrating the technology into their own security ecosystems.

Other investors include the University of California at Berkeley’s CalFund, Berkeley Frontier Fund, Comcast Ventures LLC, Craft Ventures LLC, Gentree Fund Pte. Ltd., IAG Capital Partners LLC, Javelin Venture Partners Management LLC, KDDI Corp.’s Open Innovation Fund, Taiwania Capital Management Corp. and Ubiquity Ventures Management LLC.

Stephen Lee, Okta Inc.’s vice president of technical strategy and partnerships, said Resemble AI is addressing an urgent need in the cybersecurity sector. “Its technology provides the kind of AI-powered signal verification that will be critical to strengthening the identity security fabric and protecting trust across authentication, workforce and customer experiences,” he said.

Ahmed acknowledged that deepfake detection is becoming more difficult as new tactics emerge, but he said defeatism isn’t an option. “Just because something is difficult doesn’t mean despair is the only choice,” he said. “The only way to fight AI is with AI.”

With Mike Wheatley

Image: SiliconANGLE/Ideogram
