What is a deepfake?

The rapid development of neural-network AI has greatly improved the accuracy of many computer vision applications. Since Google introduced the Transformer architecture in 2017, major tech companies and academic institutions have built numerous AI applications on it, ushering in the era of generative AI (GenAI). Generative AI has since swept the globe: by learning from vast amounts of data, AI models can create diverse content from textual prompts, giving human creativity a new boost.

However, these advances bring not only positive applications but also new challenges, the most controversial of which is deepfake technology. "Deepfake" is a portmanteau of "deep learning" and "fake," referring to the use of deep learning to generate fake images, videos, or audio. The technology can convincingly simulate a real person's facial expressions and voice, depicting things they never said or did. The results can be nearly impossible for viewers to distinguish from the real thing, sowing confusion and misunderstanding.
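To make the mechanism concrete, the early face-swap tools that popularized the term were commonly built on a shared-encoder autoencoder: one encoder learns a common facial representation, and a separate decoder per identity reconstructs each person's face. The PyTorch sketch below is a minimal illustration of that idea; the layer sizes are arbitrary, and modern deepfake systems (GANs, diffusion models) are far more sophisticated.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Maps a 64x64 RGB face crop to a compact latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class IdentityDecoder(nn.Module):
    """Reconstructs a face for ONE identity from the shared latent code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        x = self.fc(z).view(-1, 128, 8, 8)
        return self.net(x)

# Training pairs each identity's faces with its own decoder; at swap time,
# person A's face is encoded and run through person B's decoder, producing
# B's appearance with A's pose and expression.
encoder = SharedEncoder()
decoder_a, decoder_b = IdentityDecoder(), IdentityDecoder()
face_a = torch.rand(1, 3, 64, 64)     # stand-in for a real face crop
swapped = decoder_b(encoder(face_a))  # A's expression on B's face
print(swapped.shape)                  # torch.Size([1, 3, 64, 64])
```

The swap works because the shared encoder captures pose and expression while each decoder supplies identity-specific appearance.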

When a deepfake is used appropriately, it can bring significant value. For example, in Rogue One: A Star Wars Story, deepfake technology brought a young Princess Leia back to the screen, and in the Fast & Furious films it allowed the late Paul Walker's image to live on. Similarly, at NVIDIA's GTC 2021 conference, deepfake technology was used to generate a video segment of NVIDIA's CEO, Jensen Huang, demonstrating generative AI's potential in marketing video production by simplifying the production process and significantly reducing costs. The U.S. startup HereAfter AI even uses deepfake technology to let people "converse" with deceased loved ones, a more human-centric application of the technology.

NVIDIA used deepfake technology in 2021 to generate a video segment of NVIDIA CEO Jensen Huang's speech.
Source: CNET

The dangers of deepfakes

Fake News

As a powerful tool, deepfakes can have a significant negative impact when misused by malicious actors. For example, Ukrainian YouTuber Olga Loiek became a victim of this technology. Individuals with ulterior motives used her face and voice to generate numerous fake videos on social media platforms such as YouTube, depicting her as supporting Russia or professing love for China, among other fabricated statements. These videos, none of which she made, misled many viewers and manufactured false international public opinion.

Ukrainian YouTuber Olga Loiek exposes how she was subjected to AI face-swapping for pro-China propaganda.
Source: Olga Loiek YouTube

Political and Election Interference

The dangers of deepfakes are not limited to fake news; they can also seriously interfere with politics and elections. During the 2024 U.S. presidential campaign, several incidents emerged involving deepfake-generated content. These false portrayals attempt to sway voters' judgment, posing a significant threat to electoral fairness and democratic institutions. For example, former President Trump shared a photo created with deepfake technology that depicted mega-popstar Taylor Swift expressing support for him. Such fabricated content can easily mislead the public and distort voter decisions.

Former President Trump shared this image, created using deepfake technology, showing mega-popstar Taylor Swift expressing support for him.
Source: TMZ

Fraud & Crime

Deepfakes have been used to graft the faces of celebrities and public figures (usually women) onto explicit, pornographic videos. This can severely damage victims' reputations and inflict significant psychological trauma. In the UK this year, a new law made the creation of sexually explicit deepfake images a criminal offence; anyone who makes explicit images of a person without their consent may face a fine, possible jail time, and a criminal record.

Additionally, deepfake technology has been used in commercial fraud. In one significant case in early 2024, criminals used deepfake technology to stage a fake video conference, simulating the voices and appearance of company executives. The duped finance worker believed they were in a video conference with the company's CFO; in reality, the video and audio were generated in real time by the attackers. Convinced the meeting was genuine, the finance worker transferred $25 million USD based on the forged instructions.

Although face-swapping, video editing, and post-production techniques have existed for a long time, deepfake technology takes these manipulations to a new level, making it difficult for the human eye to tell real from fake and even enabling the real-time generation of false content. Because deepfake tools can be freely downloaded from the internet, their potential for harm is nearly impossible to measure.

Regulations related to deepfakes

To address the potential risks posed by deepfake technology, many countries have begun to establish regulations governing its application. The European Union's Artificial Intelligence Act (EU AI Act) stipulates that companies using deepfake technology must embed digital watermarks in the generated images, allowing detection technologies to easily identify whether the content was generated by AI. Furthermore, individuals, companies, or organizations using deepfake technology must clearly label the generated content as AI-generated so that viewers can recognize it immediately.
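As a toy illustration of machine-detectable provenance marks, the sketch below hides and recovers a short tag in an image's least significant bits. This is only a conceptual example: the AI Act does not prescribe a specific scheme, and production systems use robust invisible watermarks or signed metadata (e.g., C2PA) rather than fragile LSB tricks, which do not survive recompression.

```python
import numpy as np
from PIL import Image

TAG = b"AI-GENERATED"  # hypothetical provenance tag

def embed_tag(img: Image.Image, tag: bytes = TAG) -> Image.Image:
    """Hide `tag` in the least significant bits of the red channel."""
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    bits = np.unpackbits(np.frombuffer(tag, dtype=np.uint8))
    flat = pixels[..., 0].flatten()
    if bits.size > flat.size:
        raise ValueError("image too small for tag")
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    pixels[..., 0] = flat.reshape(pixels[..., 0].shape)
    return Image.fromarray(pixels)

def extract_tag(img: Image.Image, length: int = len(TAG)) -> bytes:
    """Recover a `length`-byte tag from the red channel's LSBs."""
    pixels = np.array(img.convert("RGB"), dtype=np.uint8)
    bits = pixels[..., 0].flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes()

marked = embed_tag(Image.new("RGB", (128, 128), "gray"))
print(extract_tag(marked))  # b'AI-GENERATED'
```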

The United States has also established regulations to mitigate the risks posed by deepfake technology. Texas passed a law in 2019 prohibiting the creation or distribution of deepfake videos intended to attack candidates or interfere with election results. Additionally, California's AB 602 and AB 730 explicitly restrict deepfake content, targeting pornographic and election-related material respectively.

Social media platforms have also begun to address the impact of deepfake technology. Since 2020, Facebook has prohibited users from posting manipulated videos with misleading intent, while YouTube updated its privacy policy and platform rules in 2023 to strengthen its management of deepfake content. However, these regulations and policies primarily constrain law-abiding users; they cannot completely stop those who intentionally misuse deepfake technology.

In Taiwan, the Financial Supervisory Commission (FSC) announced the "Core Principles and Policies for the Use of AI in the Financial Industry" in October 2023. The policy specifies that when banks conduct electronic banking services via video conferencing, they must confirm that participants are real individuals, preventing identity fraud carried out through technological means such as pre-recorded videos, masks or simulated images, or deepfake technology. Verification records of the relevant video sessions and transaction processes must also be retained for future reference. This policy materially strengthens the financial industry's ability to mitigate the risks posed by deepfake technology.

Challenges deepfakes pose to eKYC verification

eKYC (Electronic Know Your Customer) is the process of verifying a customer's identity remotely through online or digital technologies, helping service providers verify and authenticate customers quickly and reliably. eKYC is widely used in industries such as finance and banking, increasing business efficiency and reducing operational costs.
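As a rough illustration of how the pieces fit together, a typical eKYC flow chains three checks: document verification, face matching between the ID photo and a live selfie, and liveness detection. The Python skeleton below is a hypothetical sketch of that orchestration; the function names, stubs, and the 0.8 threshold are placeholders, not any real vendor's API.

```python
from dataclasses import dataclass

@dataclass
class EkycResult:
    document_ok: bool
    face_match_score: float  # similarity between ID photo and selfie, 0..1
    liveness_ok: bool

    @property
    def approved(self) -> bool:
        # All three checks must pass; 0.8 is an illustrative threshold.
        return self.document_ok and self.face_match_score >= 0.8 and self.liveness_ok

def verify_document(id_image) -> bool:
    """Placeholder: real systems run OCR, MRZ checks, and tamper detection."""
    return id_image is not None

def match_faces(id_image, selfie) -> float:
    """Placeholder: real systems compare deep face embeddings (e.g., cosine similarity)."""
    return 0.93 if id_image is not None and selfie is not None else 0.0

def check_liveness(frames) -> bool:
    """Placeholder: real systems run presentation-attack detection on the frames."""
    return len(frames) > 1

def run_ekyc(id_image, selfie_frames) -> EkycResult:
    """Hypothetical orchestration of an eKYC pipeline."""
    return EkycResult(
        document_ok=verify_document(id_image),
        face_match_score=match_faces(id_image, selfie_frames[0]),
        liveness_ok=check_liveness(selfie_frames),
    )

result = run_ekyc(id_image="id.jpg", selfie_frames=["f0.jpg", "f1.jpg"])
print(result.approved)  # True under these stubbed checks
```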

Further Reading: KYC Becomes eKYC with the Addition of Facial Recognition in the BFSI Industry

However, the remote collection of customer facial images creates openings that deepfake technology can exploit. Hackers may attack eKYC systems using the following methods:

File selection attack:

If the eKYC digital identity verification process allows customers to upload their own photos, hackers may tamper with identity documents and photos, or even use deepfake technology to create fake photos for identity impersonation. Such forged images are difficult for service providers to detect effectively, demanding significant manpower to verify data authenticity and raising the risk of financial crime.
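As a small illustration of a first-line screen against tampered uploads (a hypothetical heuristic, nowhere near a full forensic pipeline, and not FaceMe's method), the sketch below flags uploads whose EXIF Software tag names an editing tool or whose error-level analysis residual is unusually high.

```python
import io
from PIL import Image, ImageChops

EDITOR_HINTS = ("photoshop", "gimp", "stable diffusion")  # illustrative list

def suspicious_software_tag(img: Image.Image) -> bool:
    """Flag files whose EXIF Software tag (0x0131) names a known editor."""
    software = str(img.getexif().get(0x0131, "")).lower()
    return any(hint in software for hint in EDITOR_HINTS)

def ela_score(img: Image.Image, quality: int = 90) -> float:
    """Error-level analysis: recompress as JPEG and measure the mean residual.
    Heavily edited regions often recompress differently from the rest."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, "JPEG", quality=quality)
    diff = ImageChops.difference(img.convert("RGB"), Image.open(buf))
    pixels = list(diff.getdata())
    return sum(sum(p) for p in pixels) / (3 * len(pixels))

# Demo on a synthetic image (a real system would load the customer's upload):
img = Image.new("RGB", (256, 256), "gray")
if suspicious_software_tag(img) or ela_score(img) > 8.0:  # illustrative threshold
    print("route upload to manual review")
else:
    print("no obvious tampering signals")
```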

Photo and video reproduction attack:

To enhance security, some eKYC processes now require users to take real-time selfies with their cameras, reducing the risk of hackers impersonating victims with stolen documents. However, hackers can still hold up a computer or tablet displaying fake facial photos or videos, or even deepfake-generated dynamic video, to deceive the phone's camera. This type of impersonation is called a "presentation attack" (PA). Service providers can block such attacks with real-time manual review, but this significantly increases reviewers' workload and undermines efficiency.
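A common low-cost counter to static replays is active liveness: challenge the user to blink or turn their head, then verify the motion actually occurred. The sketch below illustrates blink detection via the eye aspect ratio (EAR); the landmark indices assume MediaPipe FaceMesh output, and the thresholds are illustrative. Note that a real-time deepfake can blink on demand, so production systems layer texture, depth, and other passive signals on top of challenges like this.

```python
import numpy as np

# MediaPipe FaceMesh indices commonly used for the left eye's EAR:
# two horizontal corner points and two pairs of vertical lid points.
LEFT_EYE = [33, 160, 158, 133, 153, 144]

def eye_aspect_ratio(landmarks: np.ndarray, idx=LEFT_EYE) -> float:
    """EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); drops sharply on a blink."""
    p1, p2, p3, p4, p5, p6 = (landmarks[i] for i in idx)
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def saw_blink(ear_series, closed_thresh=0.2, min_closed_frames=2) -> bool:
    """A blink = EAR dipping below threshold for a couple of consecutive frames."""
    closed = 0
    for ear in ear_series:
        closed = closed + 1 if ear < closed_thresh else 0
        if closed >= min_closed_frames:
            return True
    return False

# Synthetic open-eye landmarks for the six indices (478-point FaceMesh layout):
pts = np.zeros((478, 2))
pts[[33, 133]] = [[0.0, 0.0], [4.0, 0.0]]    # eye corners
pts[[160, 158]] = [[1.0, -0.6], [3.0, -0.6]] # upper lid
pts[[144, 153]] = [[1.0, 0.6], [3.0, 0.6]]   # lower lid
print(round(eye_aspect_ratio(pts), 2))       # 0.3 (open eye)

# A printed photo or looping replay that never blinks fails the challenge:
print(saw_blink([0.31, 0.30, 0.12, 0.11, 0.29]))  # True  (blink present)
print(saw_blink([0.31, 0.30, 0.31, 0.30, 0.31]))  # False (no blink)
```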

Camera signal injection attack:

Finally, hackers may compromise the device running the eKYC application or website and inject streams of fake images generated with deepfake technology. The eKYC application may then mistake these injected images for a live individual when, in reality, everything it sees is fabricated. This attack has a higher technical threshold, making the impersonation harder to carry out but also more complex to defend against. Although such attacks have not yet appeared at scale, financial institutions and other companies that rely on eKYC verification should proactively recognize the risk and put protective measures in place.
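Defense here starts with verifying that frames come from a trusted physical sensor. As one small, platform-specific illustration (Linux only, with a hypothetical name list that a determined attacker can trivially evade), the sketch below enumerates Video4Linux devices and flags names associated with virtual-camera drivers. Real defenses rely on OS-level device attestation, signed drivers, and signal-level artifact detection rather than name matching.

```python
from pathlib import Path

# Illustrative deny-list; attackers can rename devices, so treat this as
# one weak signal among many, not a defense on its own.
VIRTUAL_HINTS = ("obs", "v4l2loopback", "virtual", "dummy")

def enumerate_cameras():
    """Yield (device, name) for each Video4Linux capture device (Linux)."""
    for name_file in sorted(Path("/sys/class/video4linux").glob("*/name")):
        yield name_file.parent.name, name_file.read_text().strip()

def flag_virtual_cameras():
    for device, name in enumerate_cameras():
        if any(hint in name.lower() for hint in VIRTUAL_HINTS):
            print(f"/dev/{device}: '{name}' looks like a virtual camera")
        else:
            print(f"/dev/{device}: '{name}' appears to be a physical sensor")

if __name__ == "__main__":
    flag_virtual_cameras()
```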

How does FaceMe block deepfakes?

In the FaceMe eKYC digital identity verification process, FaceMe employs multiple strategies to effectively prevent deepfakes and other potential attacks, ensuring that the results of digital identity verification are highly credible, reliable, and secure.

FaceMe: the best choice for comprehensive anti-spoofing and deepfake detection

In summary, the eKYC authentication process must defend not only against identity impersonation using deepfake technology but also against other forms of attack, such as presentation attacks (PA), forged identity documents, and counterfeit facial masks. FaceMe provides a comprehensive anti-fraud solution that withstands these impersonation attacks, offering reliable protection for the financial industry and any service provider that requires digital identity verification.

FaceMe not only helps these institutions comply with regulatory requirements but also further ensures the high security of accounts and transactions, thereby establishing a safer and more trustworthy digital identity verification system. Choosing FaceMe means choosing reliability and security, ensuring that your business can operate steadily in the face of various identity impersonation risks.

Our comprehensive anti-spoofing solutions will provide secure protection for your digital identity verification.