![https://pixabay.com/illustrations/world-ciber-attacks-keyboard-cyber-2030121/](https://d15shllkswkct0.cloudfront.net/wp-content/blogs.dir/1/files/2023/09/world-2030121_1280.jpg)
As expected, this summer has seen a rise in various cybersecurity threats based on deepfake audio and video impersonations.
Despite warnings from the Federal Bureau of Investigation in June, these types of threats are now quite common. The fakes are used to lend credibility to larger exploits, such as a phishing email lure or a request from a superior. They run the gamut from executive impersonation to various forms of financial fraud and the theft of account credentials.
One early case of deepfake audio involved the targeting of the chief executive of a U.K.-based energy company, where it was used to steal $243,000 in 2019. But that attack seems almost quaint now, thanks to fakes that leverage better computing power, easily customized applications and deeper integration with artificial intelligence techniques and models.
Earlier this week the U.S. National Security Agency, the Cybersecurity and Infrastructure Security Agency and the FBI jointly published their recommendations in the document Contextualizing Deepfake Threats to Organizations. The posting offers an overview of deepfake media threats, techniques and trends. It cites numerous examples of how deepfakes have entered the cybercriminal’s toolkit, such as applications that “have the ability to lift a 2D image to 3D to enable the realistic generation of video based on a single image,” and fake audio clips that can be generated from just a few seconds of a genuine recording of a speaker.
Tracking down the fakes isn’t impossible. Back in 2022, Kerry Tomlinson of the computer security training organization SANS Institute published an early guide to scrutinizing images carefully, looking for clues such as an inconsistent background, a subject’s eyes, strangely attached jewelry or misshapen shoulders.
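One widely used forensic heuristic for that kind of scrutiny is error level analysis, which exploits the fact that regions pasted into a JPEG tend to recompress differently from the rest of the picture. The sketch below is a minimal illustration using the Pillow imaging library, not any tool named in the guide, and the file names are placeholders: it re-saves an image at a known quality and amplifies the difference so edited regions stand out.

```python
# Minimal error level analysis (ELA) sketch using Pillow.
# ELA re-saves an image as JPEG and diffs it against the original;
# regions edited after the last save often recompress differently
# and show up as bright areas in the difference image.
import io

from PIL import Image, ImageChops


def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")

    # Re-encode the image at a known JPEG quality, in memory.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)

    # Pixel-wise difference highlights inconsistently compressed regions.
    diff = ImageChops.difference(original, resaved)

    # The raw differences are faint, so scale them up for inspection.
    extrema = diff.getextrema()  # per-channel (min, max) tuples
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    scale = 255.0 / max_diff
    return diff.point(lambda value: int(value * scale))


if __name__ == "__main__":
    error_level_analysis("suspect.jpg").save("suspect_ela.png")
```

Bright regions in the output warrant a closer look, but ELA is only a heuristic: high-quality fakes, screenshots and repeatedly re-saved images can all defeat it.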
But there are some new tools available, and a large section of the joint government document is devoted to offering recommendations for security professionals to spot and potentially stop the fakes from proliferating across their organizations.
These include:

* Selecting and implementing technologies to detect deepfakes and demonstrate the provenance of media, including real-time verification capabilities and passive detection techniques.
* Protecting the public data and communications of high-priority officers who are likely impersonation targets.
* Planning for, and rehearsing responses to, attempted deepfake exploitation.
* Training personnel to recognize and report suspected fake media.
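On the provenance point, the idea is to cryptographically sign media at capture or publication time so that downstream viewers can check it hasn’t been swapped or altered. As a toy illustration of that underlying idea, rather than any specific standard such as C2PA, the sketch below verifies a detached Ed25519 signature over a file’s SHA-256 digest using the widely available Python cryptography package; the key distribution and signature format here are assumptions.

```python
# Toy media-provenance check: verify a detached Ed25519 signature
# over a media file's SHA-256 digest. This illustrates the
# signed-provenance idea behind standards like C2PA; it is NOT
# the C2PA format itself.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def file_digest(path: str) -> bytes:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    sha256 = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            sha256.update(chunk)
    return sha256.digest()


def verify_provenance(media_path: str, signature: bytes,
                      public_key_bytes: bytes) -> bool:
    """Return True if `signature` is a valid Ed25519 signature by the
    publisher's key over the media file's digest."""
    public_key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
    try:
        public_key.verify(signature, file_digest(media_path))
        return True
    except InvalidSignature:
        return False
```

A real deployment would also bind the signature to metadata such as the capture device, timestamp and edit history, and would distribute keys through a trust framework, which is exactly what provenance standards set out to define.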
“Threat actors are constantly evolving and finding ways to circumvent defensive measures. This calls for the need for a multifaceted approach,” said Veridas CEO Eduardo Azanza. Defensive measures are works in progress, and the challenge will be whether they can stay ahead of attackers who are building more advanced methods to create ever more convincing fakes.