Scammers used deepfake CFO on video call to trick company employee into sending them $25M
Scammers who used artificial intelligence-powered “deepfakes” to pose as a multinational company’s chief financial officer in a video call were able to trick an employee into sending them more than $25 million, CNN reported.
The finance worker was duped into joining a video call with the purported CFO and several other senior executives at the company. Although the participants looked and sounded convincingly real, they were in fact deepfakes, Hong Kong police said in a statement Friday.
According to CNN, the victim received an email purporting to be from the company’s CFO. The employee initially suspected it was a phishing attempt, since it asked for a large sum of money to be transferred into an offshore account. However, the scammers dispelled those doubts by inviting the employee to a video call, where the supposed CFO and several other colleagues he recognized were in attendance.
Believing all of the participants on the call to be real, the employee agreed to send more than 200 million Hong Kong dollars (about $25.6 million) to a specified account, senior superintendent Baron Chan Shun-ching said. “[In the] multi-person video conference, it turns out that everyone [he saw] was fake,” Chan told RTHK, the city’s public broadcaster.
The scam came to light only several days later, when the employee grew concerned about the transfer and checked with the corporate head office. Neither the worker nor the company has been identified.
Deepfakes are videos that have been manipulated by computers, often using AI, to make people appear to say or do something they never did, or to appear in places they weren’t. Thanks to advances in AI, deepfakes have become more convincing than ever before, and they’re often used to defame people in the public eye.
Some deepfakes look incredibly realistic, so it’s no surprise that criminals are abusing the technology in some very inventive ways to facilitate scams. Hong Kong police said the force alone has come across more than 20 cases in which AI deepfakes were used to trick facial recognition systems by imitating the people pictured on identity cards.
Superintendent Chan said police recently arrested six people in connection with a scam involving eight stolen Hong Kong identity cards. The scammers used the cards to create deepfakes that could fool facial recognition systems, then submitted more than 90 loan applications and bank account registrations over the past year.
“The presentation attack employed by the threat actors targeting this multinational company for millions showcased a high level of sophistication,” Kevin Vreeland, general manager of North America at the authentication firm Veridas, told SiliconANGLE in an email. “The employee initially followed proper protocols, correctly identifying the attack as potentially rooted in phishing. However, the escalation of the incident highlights how artificial intelligence has given attackers a leg up and created a plethora of security challenges for organizations, particularly in the era of widespread remote work.”
Vreeland said companies should implement updated and improved methods of verification and authentication. “These measures should focus on detecting the liveness and proof-of-life of their employees,” he said. “It’s also important that companies educate their employees about the dangers of deepfakes similar to other types of scams. Deepfakes usually contain inconsistencies when there is movement. For example, an ear might have certain irregularities, or the iris doesn’t show the natural reflection of light.”
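Vreeland’s point about movement inconsistencies lends itself to a simple illustration. The sketch below is purely hypothetical; it is not Veridas’ method and is far cruder than any production liveness check. It shows the general idea, though: detect the face in each frame with OpenCV and measure how much the face region changes from frame to frame, so that a feed whose facial motion is unnaturally uniform, such as a looped or synthetic video, can be flagged for human review.

```python
# Hypothetical sketch of a crude motion-based liveness signal.
# Not Veridas' method; thresholds are illustrative only.
# Requires opencv-python and numpy.
import cv2
import numpy as np

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_motion_scores(path: str, max_frames: int = 300) -> list[float]:
    """Mean absolute frame-to-frame change inside the detected face box."""
    cap = cv2.VideoCapture(path)
    prev = None
    scores: list[float] = []
    while len(scores) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = CASCADE.detectMultiScale(gray, 1.1, 5)
        if len(faces) == 0:
            prev = None  # lost the face; restart the comparison
            continue
        x, y, w, h = faces[0]
        crop = cv2.resize(gray[y:y + h, x:x + w], (128, 128)).astype(np.float32)
        if prev is not None:
            scores.append(float(np.abs(crop - prev).mean()))
        prev = crop
    cap.release()
    return scores

def looks_suspicious(path: str, min_std: float = 1.0) -> bool:
    """Flag feeds whose facial motion barely varies (illustrative threshold)."""
    s = face_motion_scores(path)
    return len(s) > 0 and float(np.std(s)) < min_std
```

A real presentation-attack detector would combine many such signals, including blink cadence, skin texture and the light reflection in the iris that Vreeland mentions, rather than relying on a single motion heuristic.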
Deepfakes have also been used in attempts to manipulate elections. In a recent example, a fake audio recording of U.S. President Joe Biden was distributed via robocalls to New Hampshire Democrats, asking them to refrain from voting in the presidential primary.
Last year, another deepfake, this one of Senator Elizabeth Warren, appeared on X, formerly Twitter, in which she appeared to say that Republicans should not be allowed to vote in the 2024 presidential election. The video was quickly identified as fake, but it was viewed more than 189,000 times in one week before being taken down.
Another recent victim of deepfakes was the pop star Taylor Swift, who was said to be “furious” that a number of sexually explicit deepfake images and videos of her appeared on X in January.
Last month, a new nonprofit organization called TrueMedia announced it’s building a tool that will use AI to detect deepfakes, with the goal of preventing them from spreading misinformation during the 2024 elections.
TrueMedia said it will analyze hours of deepfake footage to train an AI model that can detect fake videos and audio. To do this, it’s asking the public to submit examples of deepfakes so it can build a more comprehensive training dataset. The group hopes to launch a free, web-based version of its tool in the first quarter of the year. It will first be made available to journalists, fact-checkers and online influencers, the organization said.
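For a sense of what building such a detector involves, here’s a minimal, hypothetical sketch of the general approach TrueMedia describes: fine-tuning an off-the-shelf image model as a binary real-versus-fake classifier over extracted video frames. The directory layout, model choice and hyperparameters are illustrative assumptions, not details TrueMedia has disclosed.

```python
# Hypothetical sketch of a binary real-vs-fake frame classifier.
# Assumes extracted frames under data/fake/ and data/real/
# (our directory names, not TrueMedia's). Requires torch and torchvision.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# ImageFolder assigns labels alphabetically: fake -> 0, real -> 1
ds = datasets.ImageFolder("data", transform=tfm)
loader = DataLoader(ds, batch_size=32, shuffle=True)

# Start from a pretrained backbone, replace the head with two classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # illustrative; real training needs far more care
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

Production detectors of this kind typically also analyze audio and temporal patterns across frames, which is part of why TrueMedia is soliciting a large, varied dataset of submitted examples.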
Other companies, including Intel Corp. and Meta Platforms Inc., have also attempted to build tools that can root out deepfakes.
Image: Riki32/Pixabay