UPDATED 09:00 EST / DECEMBER 02 2025

AI

Identity verification startup Incode launches Deepsight to detect deepfake video calls

Identity verification and authentication startup Incode Technologies Inc. is turning its attention to the challenge of artificial intelligence deepfakes with the launch of a new defensive system called Deepsight.

With Deepsight, the startup is taking aim at a very specific deepfake threat: the use of artificial intelligence to generate a fake likeness of a real person and impersonate them during video calls.

This year has seen a big rise in the number of scammers using AI to clone voices, mimic faces and impersonate real people. Imagine a scenario where someone gets a frantic call from a best friend. The friend's voice is shaky as they explain they have been in an accident and urgently need money. The caller recognizes the friend's voice immediately – they've known them for years, after all – but that voice might not be real.

The increase in this kind of scam has been staggering. According to the Identity Theft Resource Center's 2025 Trends in Identity report, such incidents have increased by 148% this year, as cybercriminals use advanced deepfake technologies that make the deception almost impossible for humans to detect. When people can no longer be sure who they're dealing with, detection tools such as Deepsight become essential.

Incode says the ability to tell real people apart from AI-generated fakes has become table stakes for security, and that's the job Deepsight is designed to do. The new multimodal AI system analyzes video, motion and depth data simultaneously during video calls to surface inconsistencies that generative AI cannot easily reproduce.

As Incode founder and Chief Executive Ricardo Amper explains, deepfakes are no longer just novelties posted on social media making fun of people and spreading misinformation. They have evolved to become a major weapon for fraudsters. “When identity can be faked, everything breaks,” he said. “Deepsight restores trust by ensuring every capture shows a human user in front of the camera, not a deepfake.”

The Deepsight system employs a three-layer defense to help determine whether a video caller is a real human or an AI fake, and the company claims it's so accurate that it will not only identify any caller who's AI-generated, but also identify the generative AI model used to create them.

Deepsight’s behavioral layer works by spotting any subtle interaction anomalies from AI bots or fraud farms, while its integrity layer is used to verify the authenticity of the camera and device being used, so it can detect virtual media. Finally, there’s a perception layer that helps to tell apart AI deepfakes from human users by analyzing modalities such as video, motion and depth.

The company says Deepsight's effectiveness has already been proven in both the lab and real-world settings. In a recent Purdue University study that evaluated 24 deepfake detection systems, Deepsight achieved the highest accuracy and the lowest false acceptance rate among all commercially available tools, outperforming both academic and government models.

Shu Hu, assistant professor at the School of Applied and Creative Computing and director of the Purdue Machine Learning and Media Forensics Lab at Purdue University, said it achieved the highest accuracy in identifying fake samples. “This outcome suggests that Incode demonstrates stronger robustness and reliability in challenging real-world scenarios,” he added.

Meanwhile, Incode's own internal tests found that Deepsight was 10 times more accurate than trained human reviewers at spotting AI-generated video callers. The results suggest that automated detection systems are now essential for anyone likely to be targeted by a deepfake scam.

Incode said the Deepsight system has already been deployed at a number of enterprises, including TikTok’s parent company Bytedance Ltd. and banks such as PNC Bank, Scotiabank and Nubank. It added that it’s helping to protect millions of users across more than 6 million live identity sessions so far. The system can be accessed now as part of the Incode Identity Platform, which also provides tools for KYC onboarding, authentication, step-up verification, workforce access and age verification.

“AI will change how we live, work and connect,” Amper said. “Our responsibility is to make sure it does not destroy the trust that holds it all together.”

Image: SiliconANGLE/Meta AI
