battling synthetic identity fraud: strategies for banks in a deepfake world

With the advent of generative AI and the simultaneous rise of digital onboarding and other online verification processes, banks face an escalating threat: synthetic identity fraud. Fraudsters now deploy deepfake videos, AI-generated documents and virtual personas to infiltrate financial systems at scale. Half of leading banks report experiencing a rise in fraud during 2024.


what is synthetic identity fraud, and why it matters

Synthetic identity fraud occurs when criminals combine real and fabricated data, such as a child’s Social Security number with a new name and manufactured documents, to create entirely new identities. These identities are then used to open accounts, build credit and execute fraud long before detection. Because each identity corresponds only partially to a real person, the victims (often the children whose Social Security numbers were misused) may not discover the crime for years. Traditional KYC systems struggle to spot these constructions.

Moreover, using generative AI and deepfake technologies, fraudsters can now mimic facial features and voices, or craft highly convincing ID documents. One recent report found that synthetic ID document fraud surged by over 300% in North America in Q1 2025, while deepfake fraud grew 11 times. The speed and scale at which these threats arise demand a shift in how banks defend themselves.

Why banks should be vigilant:

  • Digital onboarding is a key attack surface. Approximately 62% of banks cite online account opening as the top trend that increases fraud exposure.
  • Regulatory pressure and loss potential are rising. Reported fraud losses in 2024 exceeded US$12.5 billion, a 25% increase over 2023, with much of the surge attributed to deepfakes and AI‑generated documents.
  • Trust is eroding. A recent survey found that 78% of US adults are worried about deepfakes in financial fraud, yet fewer than half believe current verification systems can stop them.

Vulnerabilities that once seemed niche are now threats to foundational trust and business continuity.


advanced strategies for mitigation


Here are key strategic actions banks should implement to defend against synthetic identity fraud:


adopt continuous and multi‑factor identity verification

Deploy biometric matching, device and behavioural analytics, and real‑time anomaly detection to spot fake or fast‑maturing identities.
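Fusing these independent signals into a single decision can be sketched as follows. This is a minimal illustration only: the field names, weights and thresholds are hypothetical, and a production system would calibrate them against real fraud outcomes.

```python
def verify_identity(signals: dict) -> str:
    """Combine biometric, device and behavioural signals into one decision.

    Each score is assumed to be normalised to [0, 1], where higher means
    more trustworthy. Weights and thresholds are illustrative assumptions.
    """
    score = (
        0.5 * signals["biometric_match"]      # face/voice match confidence
        + 0.3 * signals["device_reputation"]  # known device, no emulator flags
        + 0.2 * signals["behaviour_score"]    # typing cadence, navigation pattern
    )
    if score >= 0.8:
        return "approve"
    if score >= 0.5:
        return "step_up"  # route to additional verification
    return "reject"

print(verify_identity({"biometric_match": 0.95,
                       "device_reputation": 0.9,
                       "behaviour_score": 0.85}))  # approve
```

The point of the weighted fusion is that no single spoofed signal (for example, a deepfaked face) is enough on its own to clear the approval threshold.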


leverage AI for both detection and defence

Over 50% of fraud now involves AI or deepfakes, and 90% of banks are already using AI tools to fight this threat. Employ machine‑learning models tuned to detect deepfake‑style spoofing and synthetic‑identity patterns.
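As a simplified stand-in for such a trained model, the sketch below flags sessions whose behavioural features deviate sharply from a historical baseline. The feature names, toy baseline data and 3-sigma threshold are assumptions for illustration; real deployments would use trained ML models (for example, isolation forests or dedicated deepfake classifiers) rather than raw z-scores.

```python
import statistics

BASELINE = {  # historical per-feature observations (toy data)
    "ms_per_keystroke": [180, 200, 210, 195, 220, 205, 190, 215],
    "session_seconds":  [240, 300, 280, 320, 260, 310, 290, 270],
}

def is_anomalous(session: dict, threshold: float = 3.0) -> bool:
    """Flag a session if any feature is more than `threshold` standard
    deviations from the baseline mean."""
    for feature, history in BASELINE.items():
        mean = statistics.mean(history)
        stdev = statistics.stdev(history)
        if abs(session[feature] - mean) / stdev > threshold:
            return True
    return False

# A scripted bot types at superhuman speed and finishes in seconds.
print(is_anomalous({"ms_per_keystroke": 5, "session_seconds": 12}))  # True
```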


integrate closed‑loop fraud networks

Share anonymised data within networks that allow banks to detect identity reuse, account stacking or synthetic evolution across institutions.
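One way such networks avoid exposing raw PII is to compare hashed identity "fingerprints" rather than the underlying attributes. The sketch below is a minimal illustration under assumed normalisation rules; real networks use agreed schemes, typically with keyed hashes or secure multi-party matching rather than a bare SHA-256.

```python
import hashlib

def identity_fingerprint(ssn: str, dob: str, name: str) -> str:
    """Normalise identity attributes and hash them so only the digest,
    never the raw PII, leaves the institution. Normalisation rules here
    are illustrative assumptions."""
    normalised = "|".join([
        ssn.replace("-", ""),   # strip formatting
        dob,                    # ISO format assumed: YYYY-MM-DD
        name.strip().lower(),   # case- and whitespace-insensitive
    ])
    return hashlib.sha256(normalised.encode()).hexdigest()

# Bank A and Bank B each compute the fingerprint locally...
fp_a = identity_fingerprint("123-45-6789", "1990-01-01", "Jane Doe")
fp_b = identity_fingerprint("123456789", "1990-01-01", "  jane doe ")

# ...and the network flags identity reuse without ever seeing the raw SSN.
print(fp_a == fp_b)  # True: same underlying identity used at two banks
```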


strengthen onboarding thresholds and challenge flows

Introduce additional verification when high‑risk signals appear (e.g., a new customer using a device with no history, or an email created within hours).
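The rule logic behind such a challenge flow can be sketched as below. The signal names and thresholds are hypothetical; the structure simply shows low-risk applicants passing straight through while any triggered rule routes the applicant to step-up verification.

```python
# Each rule pairs a reason code with a predicate over onboarding signals.
HIGH_RISK_RULES = [
    ("new_device_no_history", lambda s: s["device_age_days"] == 0),
    ("freshly_created_email", lambda s: s["email_age_hours"] < 24),
    ("vpn_or_proxy",          lambda s: s["ip_is_proxy"]),
]

def challenge_flow(signals: dict) -> dict:
    """Return the onboarding action plus the reasons that triggered it."""
    triggered = [name for name, rule in HIGH_RISK_RULES if rule(signals)]
    if not triggered:
        return {"action": "standard_onboarding", "reasons": []}
    return {"action": "step_up_verification",  # e.g. live video + document recheck
            "reasons": triggered}

result = challenge_flow({"device_age_days": 0,
                         "email_age_hours": 3,
                         "ip_is_proxy": False})
print(result["action"], result["reasons"])
```

Recording the triggered reason codes alongside the decision also gives compliance teams the audit trail that governance frameworks require.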


improve KYC and AML frameworks

Upgrade static document checks with dynamic, contextual risk scoring. Generative AI is capable of producing realistic documents and identities, so banks must assume the standard document check is insufficient.
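The shift from static to contextual checks can be sketched as below: a document that passes verification contributes only weak negative evidence, while contextual signals typical of synthetic identities can still push the applicant to manual review. Signal names and weights are assumptions for illustration.

```python
def contextual_risk(applicant: dict) -> float:
    """Score an applicant from contextual signals, not documents alone.
    Weights are illustrative assumptions, not calibrated values."""
    score = 0.0
    if applicant["document_check_passed"]:
        score -= 0.2   # necessary, but weak evidence on its own
    if applicant["credit_file_thin"]:
        score += 0.3   # thin files are common for synthetic identities
    if applicant["applications_last_30d"] >= 3:
        score += 0.4   # application velocity across institutions
    if applicant["address_shared_with_many_ids"]:
        score += 0.3   # identity "stacking" signal
    return score

applicant = {
    "document_check_passed": True,   # AI-generated documents can pass OCR
    "credit_file_thin": True,
    "applications_last_30d": 4,
    "address_shared_with_many_ids": True,
}
risk = contextual_risk(applicant)
print("refer" if risk >= 0.5 else "clear")  # refer: document passed, context failed
```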


build fraud‑aware culture and governance

Ensure all levels of the organisation understand synthetic identity fraud, deepfakes and their business impact. Governance frameworks should include regular scenario‑testing, audit logs of identity verification and escalation paths for AI‑based anomalies.


implementation challenges and industry response

While the strategies above are clear, banks face implementation hurdles:

  • Data silos and integration issues remain the biggest barrier: 87% of fraud teams cite fragmented data sources as a major challenge to deploying advanced AI models.
  • Ethical and regulatory concerns surround using AI: Institutions must balance risk detection with fairness, transparency and data‑privacy obligations.
  • The speed of innovation by fraudsters is relentless: Threat actors evolve fast, and security tools that were state‑of‑the‑art 12 months ago may already be bypassed. Continuous monitoring and adaptation are essential.

In response, many institutions are collaborating with fintech platforms, identity‑verification specialists and regulatory bodies to share intelligence and build standardised defences. Closed‑loop networks and federated fraud databases are gaining traction as collective tools.


strategic recommendations for banks’ leadership

For bank executives looking to take decisive action:

  • Invest in identity infrastructure that spans onboarding, account‑maintenance and transaction monitoring, focusing on behaviour, device and network signals rather than documents alone.
  • Deploy AI‑native fraud systems that can evolve and self‑adapt to new deepfake and synthetic‑identity techniques, while maintaining human‑in‑the‑loop oversight.
  • Build partnerships across the ecosystem, including fintechs, regulatory sandboxes and identity‑verification networks, to exchange fraud intelligence, develop standards and stay ahead of techniques.
  • Elevate risk metrics and reporting at the board level to reflect the impact of synthetic identity fraud and deepfakes to ensure proper resourcing and oversight.
  • Review your culture and training. Ensure staff recognise deepfake and synthetic‑identity risk vectors, including voice‑cloning, spoofed documents and social‑engineering tactics.

how can Infosys BPM help protect against synthetic identity fraud?

Synthetic identity fraud is a fast-growing threat to banking institutions worldwide. At Infosys BPM, we support clients in designing the fraud-prevention capabilities described above and in keeping their defences agile, comprehensive and future-ready. Explore how you can transform identity verification and build trust with tailored financial crime compliance offerings for businesses by Infosys BPM.


Frequently asked questions

  1. Why is synthetic identity fraud particularly dangerous for banks in a deepfake-driven environment?
     Synthetic identity fraud combines real and fabricated data with deepfakes and AI-generated documents, allowing criminals to pass onboarding, build credit, and execute large frauds before detection, often without a directly identifiable victim.

  2. How can continuous and multi-factor identity verification help detect synthetic identities earlier?
     Combining biometrics, device intelligence, behavioural analytics, and continuous authentication turns identity from a one-time check into an ongoing assessment, making it easier to spot fast-maturing or inconsistent profiles that indicate synthetic identities.

  3. What role should AI play in defending against synthetic identity and deepfake fraud?
     AI models can analyse high-volume, high-velocity data to detect anomalies, deepfake artefacts, and synthetic identity patterns that rule-based systems miss, and can adapt as attackers change their tactics.

  4. Why are closed-loop fraud networks and data sharing important for banks?
     Closed-loop networks and shared fraud intelligence help banks see identity reuse, account “stacking”, and synthetic evolution across institutions, reducing the chance that a synthetic identity succeeds by moving between providers.

  5. What implementation and governance challenges should bank leaders plan for when strengthening defences against synthetic identity fraud?
     Banks must address data silos, model explainability, privacy and fairness requirements, and the need for ongoing tuning and training so fraud controls stay effective as synthetic identity and deepfake techniques evolve.