How financial institutions can prepare for the emerging deepfake threat

Article Sep 3, 2025
By Maryam Hosseini

Note: This is a guest-contributed article. Maryam Hosseini is Senior Director, IAM Strategy and Product Management, Global Security for Royal Bank of Canada.

Financial institutions often rely on facial and voice recognition to authenticate the identity of customers—whether those customers need to access an online banking application or to talk with a call center representative. But these forms of authentication are becoming increasingly susceptible to malicious parties that use generative AI to create convincing replicas of faces and voices, known as deepfakes.

The problem is surprisingly widespread. In November 2024, FinCEN issued an alert to help financial institutions identify fraud associated with deepfakes created with generative AI. Gartner predicts that by 2026, 30% of enterprises will consider identity verification and authentication solutions unreliable in isolation due to the prevalence of AI-generated deepfakes.

As generative AI becomes more sophisticated and difficult to detect, AI-enabled crime will become harder to prevent. If we don’t act to implement appropriate defenses or detection mechanisms, we can expect that more bad actors will use this new technology to bypass authentication mechanisms.

Cybersecurity today is at a historic inflection point, punctuated by a series of converging threats that include deepfakes. This is why banks and other financial institutions should act now, adding stronger security and better customer awareness to stay ahead of this emerging threat.

As cybersecurity professionals, we must be alert to both the short- and long-term risks in our environments. While sophisticated deepfakes are rare, they will only become more pervasive and more convincing as the tools to create them become more widespread and computing power becomes more accessible.

To make sure our defenses are effective against deepfakes, the framework below may be helpful. While my experience is in the financial sector, it’s my hope that this guide will be useful to others as well. Today’s cybercriminals tend not to specialize by industry, taking advantage of vulnerabilities wherever they find them.

Generative AI is rewriting the rules of detection, demanding new approaches to crime prevention.


1. Assess the risk

Begin with a systematic approach to identifying and assessing the risk of applications in your environment, as well as the tools used to support your business (such as phones, web conferencing, etc.). These three factors may help you evaluate your risk.

  • Function: Applications and tools that facilitate access to funds or perform large transactions pose a higher risk. A document-signing application that replaces in-person signatures is likely lower-risk.
  • Breadth: An external-facing application is more vulnerable than one that is internal-facing. Applications that support a large client base or are used by a significant number of employees may be considered higher-risk.
  • Data: Evaluate the data that resides on each application. If an application stores sensitive client or business-related data, that indicates a higher risk profile and the need for enhanced security.
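The three factors above can be combined into a simple scoring rubric. The sketch below is illustrative only: the weights, the tier cutoffs, and the `AppProfile` structure are assumptions for demonstration, not a standard methodology.

```python
from dataclasses import dataclass

@dataclass
class AppProfile:
    moves_funds: bool            # Function: facilitates access to funds or large transactions
    external_facing: bool        # Breadth: exposed outside the organization
    user_count: int              # Breadth: size of the client/employee base
    stores_sensitive_data: bool  # Data: sensitive client or business data at rest

def risk_score(app: AppProfile) -> int:
    """Toy additive score: higher means stronger defenses are warranted.
    Weights are illustrative assumptions, not a prescribed scale."""
    score = 0
    if app.moves_funds:
        score += 3
    if app.external_facing:
        score += 2
    if app.user_count > 10_000:
        score += 1
    if app.stores_sensitive_data:
        score += 2
    return score

# Example: an external payments app with a large client base and sensitive data
payments = AppProfile(True, True, 250_000, True)
print(risk_score(payments))  # 8 -> highest tier: layer on biometric + MFA defenses
```

A rubric like this is less important for the exact numbers than for forcing a consistent, repeatable comparison across the application inventory.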

2. Apply additional security where necessary

Based on the risk profile of your application or tool, the next step may be to apply measured and appropriate security defenses. This can be accomplished through various means, such as biometric authentication services, multi-factor authentication, and other layered approaches.

Many organizations use biometric authentication services provided by vendors or native device authentication services. Although the most widely used native device authentication services, such as Apple's Face ID and Android's biometric authentication, are extremely difficult to circumvent, studies have demonstrated the possibility of bypass. You'll want to verify that these services are performing within acceptable performance thresholds and risk tolerances.
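Verifying "acceptable performance thresholds" can be as simple as tracking a service's measured false-accept rate (FAR) against your risk tolerance. In this sketch the 1-in-10,000 limit is an illustrative assumption, not a regulatory figure; each institution would set its own threshold.

```python
def within_tolerance(false_accepts: int, attempts: int,
                     far_limit: float = 1e-4) -> bool:
    """Compare a measured false-accept rate against a risk tolerance.
    The 1e-4 (1-in-10,000) default is an illustrative assumption."""
    if attempts == 0:
        return True  # no observations yet; nothing to flag
    return false_accepts / attempts <= far_limit

print(within_tolerance(false_accepts=2, attempts=50_000))  # True  (FAR = 4e-5)
print(within_tolerance(false_accepts=8, attempts=50_000))  # False (FAR = 1.6e-4)
```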

Let’s look at the biometrics that deepfakes can generate, as well as potential defenses.

Voice attacks

These attacks are not new, dating back at least to voice replay attacks. But it’s much easier to create a deepfake voice today, and the quality of these impersonations is improving. Defenses against these threats can include:

  • Establishing safe words or phrases that only an authorized user would know
  • Detecting caller ID spoofing and SIM swaps
  • Automatically correlating phone and identity information
  • Switching to another channel for additional verification
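Several of these defenses reduce to a routing policy: when the phone signals don't line up, escalate to another channel. The sketch below is a hypothetical policy, not a production fraud engine; the function name, the 7-day SIM-change window, and the input fields are all assumptions for illustration.

```python
from datetime import datetime, timedelta

def requires_stepup(caller_number: str, phone_on_file: str,
                    last_sim_change: datetime, now: datetime) -> bool:
    """Escalate to another channel when the caller ID doesn't match the
    number on file, or when the SIM behind that number changed recently
    (a common SIM-swap signal). The 7-day window is illustrative."""
    if caller_number != phone_on_file:
        return True
    return now - last_sim_change < timedelta(days=7)

# A caller whose SIM changed yesterday gets routed to extra verification
print(requires_stepup("+15551234567", "+15551234567",
                      last_sim_change=datetime(2025, 9, 2),
                      now=datetime(2025, 9, 3)))  # True
```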

Deepfake photos and videos

In a presentation attack, a deepfake photo or video is shown to a camera. In an injection attack, digital content such as biometric information is injected directly into an authentication service. Defenses against these threats can include:

  • Heightened image analysis, which can track key points on a face, detecting inconsistencies in movements. It may also detect irregularities in blinking and unusual pixel patterns. The image noise in deepfakes is also often slightly different than that of genuine images.
  • So-called liveness detection examines characteristics such as motion blur and texture patterns.
  • Metadata detection uses the data embedded in media files to trace the file’s origin and authenticity.
  • Third-party detection tools
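To make the metadata-detection idea concrete, here is a hedged sketch that inspects metadata already extracted from a media file (for example, by an EXIF parser). The field names follow EXIF conventions, but the specific checks and the tool list are illustrative assumptions; missing or generator-specific fields are weak signals, not proof, of synthetic media.

```python
def metadata_flags(meta: dict) -> list[str]:
    """Return weak-signal flags from extracted media metadata.
    Checks are illustrative; absence of metadata is common in
    legitimately re-encoded images too."""
    flags = []
    # Genuine camera captures usually record the device make/model
    if not meta.get("Make") and not meta.get("Model"):
        flags.append("no capture-device info")
    # Some generators stamp their name into the Software tag
    software = (meta.get("Software") or "").lower()
    if any(tool in software for tool in ("stable diffusion", "midjourney", "dall-e")):
        flags.append("known generative tool in Software tag")
    # Synthetic media often lacks an original capture timestamp
    if "DateTimeOriginal" not in meta:
        flags.append("no original capture timestamp")
    return flags

print(metadata_flags({"Software": "Stable Diffusion"}))
```

In practice, signals like these feed a broader scoring pipeline alongside image analysis and liveness detection rather than triggering decisions on their own.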
     
Some organizations are turning to passwordless authentication to enhance security.

Choosing the right type of security

When implementing multi-factor authentication, beware of implementations that add another layer but are not truly multifactor. For example, if your clients are using passwords, and you also ask them for their dog’s name, you’ve used the same factor (something they know) twice. It’s relatively easy to look through social media and collect personal information that can be used to answer these types of security questions.
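The "same factor twice" trap can be caught mechanically by mapping each authenticator to its factor category and requiring at least two distinct categories. The mapping below is a simplified illustration (category names follow the common knowledge/possession/inherence taxonomy); real authenticators can blur these lines.

```python
# Illustrative mapping of authenticators to factor categories
FACTOR_CATEGORY = {
    "password": "knowledge",
    "security_question": "knowledge",   # same category as a password!
    "totp_app": "possession",
    "push_notification": "possession",
    "passkey": "possession",
    "fingerprint": "inherence",
    "face_scan": "inherence",
}

def is_true_mfa(factors: list[str]) -> bool:
    """True multifactor requires at least two *distinct* categories,
    not merely two prompts."""
    return len({FACTOR_CATEGORY[f] for f in factors}) >= 2

print(is_true_mfa(["password", "security_question"]))  # False: both 'knowledge'
print(is_true_mfa(["password", "totp_app"]))           # True: knowledge + possession
```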

Some organizations are turning to passwordless authentication to reduce these risks. Rather than requiring users to memorize a password, this method relies on other methods that can include system device authentication, one-time passwords sent to a user’s email or phone, push notifications to other devices, passkeys, or biometric identifiers.

3. Keep up with regulatory guidelines

Regulatory guidelines and requirements can sometimes fall behind the state of the security threats that professionals are facing. But the guidelines can still offer valuable information. Regulators are often looking at a broader threat landscape than any individual or institution, so their guidelines can be useful in helping you stay aware of emerging threats.

Regulatory guidelines may also help your organization keep up with industry standards, because those guidelines give you a window into how other financial institutions are operating. Your security measures need to be at least in line with those of others in the industry. Otherwise, your organization may be considered an easier target.

Conclusion

The increasing prevalence and sophistication of deepfakes pose an additional challenge to security organizations, as deepfakes can be used to deceive authentication services. As deepfakes become more widespread and more convincing, every organization needs to ensure that its defenses and systems can detect and thwart these attacks.

To improve the ability to withstand a deepfake attack, organizations should assess their unique application risk landscape and apply a strong layered multifactor authentication approach, providing additional security when required. While security will always be a bit of a cat-and-mouse game, implementing additional security now, where appropriate, can enable enterprises to take an important step ahead of attackers.
