THE Fraud Trend to Watch in 2026 and Beyond

The Era of Deepfakes and Synthetic Identities

Fraud Trends
  • 16/02/26


The fraud trend defining 2026 isn’t a new attack vector, but one that has been building and evolving over the last few years. The landscape of financial fraud is shifting at an unprecedented pace, and as we enter 2026, financial institutions (FIs) face increasingly sophisticated threats that leverage cutting-edge technologies. Our work across the industry shows that synthetic identities, deepfakes, and their role in advanced scams and account takeovers remain among the most prevalent threats, with the integration of artificial intelligence (AI) into fraud tactics reshaping the contours of financial crime.

In this article, we explore this fraud trend, offering insights and actionable strategies to help FIs fortify their defenses and stay ahead in the ongoing battle against fraudsters.

Synthetic Identities: The Invisible Fraudsters

Synthetic identity fraud is one of the fastest-growing threats we’re seeing in the financial and payment services industry. What makes these identities so dangerous? They are invisible in a way that traditional fraud isn’t. Because the identities don’t correspond to any real individual, fraud detection systems that rely on existing credit histories, known customer data, or simple identity verification often fail to raise red flags. Fraudsters can build fake credit histories over time, making these synthetic profiles appear increasingly credible. They then exploit these identities to open multiple accounts, apply for loans or credit cards, run up balances, and disappear without paying.

In 2026, we’re expecting synthetic fraud losses to continue to escalate sharply. The rise of AI-powered data generation tools enables fraudsters to automate the creation of plausible identities at scale. They can generate convincing names, mix and match pieces of breached data, and fabricate supporting documents or online footprints to pass identity checks. This flood of synthetic applicants threatens to overwhelm financial institutions that lack sophisticated verification methods.

Moreover, the trend towards fast, frictionless digital onboarding contributes to the problem. Customer identification and verification processes that prioritize convenience over thoroughness, for instance those relying solely on document scans or single-point verification, often fail to detect synthetic identities.

In many cases, the fraud scheme only becomes apparent once credit limits are reached or fraudulent transactions surface, long after onboarding. Worse still, expansive and destructive mule rings can emerge that are composed entirely of synthetic identities, leaving no real person to hold accountable.
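The mule-ring pattern described above can be illustrated with a toy sketch: group incoming applications by contact attributes and surface any value reused across supposedly unrelated applicants. All field names and records here are hypothetical, not a real vendor schema.

```python
from collections import defaultdict

# Hypothetical application records; field names are illustrative assumptions.
applications = [
    {"app_id": "A1", "name": "Jan Novak",   "phone": "+420111222333", "address": "1 Main St"},
    {"app_id": "A2", "name": "Eva Svoboda", "phone": "+420111222333", "address": "9 Oak Ave"},
    {"app_id": "A3", "name": "Petr Cerny",  "phone": "+420999888777", "address": "1 Main St"},
    {"app_id": "A4", "name": "Ana Silva",   "phone": "+420555666777", "address": "4 Elm Rd"},
]

def shared_attribute_clusters(apps, fields=("phone", "address")):
    """Group applications that reuse the same contact attributes.

    Legitimate customers rarely share a phone number or address with
    many unrelated applicants; synthetic-identity rings often do.
    """
    clusters = defaultdict(set)
    for app in apps:
        for field in fields:
            clusters[(field, app[field])].add(app["app_id"])
    # Keep only attribute values reused across more than one application.
    return {k: sorted(v) for k, v in clusters.items() if len(v) > 1}

suspicious = shared_attribute_clusters(applications)
for (field, value), app_ids in suspicious.items():
    print(f"{field} {value!r} shared by {app_ids}")
```

In a real deployment this kind of attribute-reuse check would be one weak signal among many, scored alongside credit-file depth and device data rather than used as a standalone rule.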

Deepfakes: The New Face of Deception

Deepfake technology has moved from science fiction to financial fraud reality within a few short years. Powered by advances in artificial intelligence and neural networks, the deepfakes we’re currently seeing synthesize highly realistic videos, images, or even audio recordings that convincingly mimic real people’s faces, voices, and mannerisms.

Initially popularized in entertainment and social media, deepfakes are now rapidly becoming a powerful tool for financial fraud, targeting large corporations with sophisticated impersonations. Among the most notorious cases is that of an employee of a multinational engineering firm who authorised the transfer of over $25 million after participating in a video call in which every other party, including a senior officer of the company, was a sophisticated deepfake impostor. Another example is a wave of attacks targeting Italy's corporate elite that used deepfakes of a high-profile politician to convince victims to transfer funds to help free journalists allegedly detained abroad, exploiting their sense of patriotism. On a smaller financial scale, but still significant, are schemes such as grandparent scams, where videos scraped from social media are used to fake voice notes that pressure family members into paying ransoms.

Fraudsters are also deploying deepfakes in ways that pose direct threats to identity verification, customer onboarding, and social engineering scams. Many FIs employ facial recognition or voice ID during customer onboarding as a trust-building measure. AI-generated deepfake videos and voice recordings can bypass these checks, tricking systems into granting access to accounts or services under false pretenses.

What makes deepfakes particularly dangerous is their growing realism and accessibility. AI tools are becoming easier to use and more affordable, leveling the playing field for less sophisticated criminals. In addition, deepfake content is increasingly difficult to distinguish from legitimate videos or audio without specialized detection systems.

These Deceptive AI Tools Help Scams Get Smarter

As we enter 2026, scams are becoming more sophisticated, frequent, and difficult to detect, driven largely by innovations in AI that we’ve discussed.

Phishing attacks, the oldest form of scam, have evolved beyond generic mass emails. Today’s fraudsters deploy AI to produce highly personalised phishing attempts that appear credible and relevant and can be leveraged across any country, in any language. By harvesting data from social media profiles, breach databases, and other online sources, AI-generated messages mimic the styles, habits, and even the typical language of the victims or their contacts. This level of customization drastically increases click-through rates and the likelihood of credential theft.

Additionally, scam bots that utilize natural language processing (NLP) can engage in real-time conversations with victims across chat platforms, email, and even phone calls. These bots can create convincing, human-like interactions, and pressure victims into divulging sensitive information or completing fraudulent transactions. Unlike traditional scams that rely on static messages, these bots create dynamic, personalised dialogues in any language, making detection harder.

Last but certainly not least, one increasingly common strategy is engagement bait: scammers post AI-generated content on social media to identify users who interact by liking, commenting, or sharing. Those users are then targeted with more advanced, personalised scams, since their engagement marks them as more likely to fall victim.

Fighting Back with AI for Good

Artificial intelligence lies at the heart of the fraud battleground in 2026. It has become a double-edged sword, simultaneously empowering criminals and fraud fighters.

Whilst AI is not a new concept in fraud detection, it is playing an increasingly central role in prevention. Specialized AI models are now being employed to detect deepfakes during onboarding or transaction approval stages, helping to combat rising impersonation scams. We’ve also seen machine learning models excel at spotting subtle nuances in customer behaviour, signals that a customer may be acting under pressure or influence, which would be missed during a manual review by a human analyst.

AI underpins deepfake detection software, sentiment and language analysis for scam detection, and biometric recognition systems that go beyond static identifiers to include behavioural traits. Vendors who provide deepfake detection services are under enormous pressure to ensure today’s technology will stand up to tomorrow’s threats, particularly because once a fraudster gets through the front door, downstream controls are often insufficient, as they detect behaviours in silos. This can only be tackled by unifying the risk platform and establishing omnichannel visibility and capabilities.

So, whilst strengthening identity verification processes remains crucial, FIs are encouraged to move beyond basic checks and leverage multiple, authoritative data sources, including government records and digital footprints, to confirm customer identities. Behavioural biometrics, which measures how users interact with devices, also offers promising results in detecting fabricated profiles. All of these different sources, orchestrated in the right way, give a 360-degree view of customer behaviour and greatly multiply the opportunities to intervene.

The key takeaway here is that monitoring cannot stop at onboarding. Continuous scrutiny of account activity is necessary to detect suspicious patterns, such as rapid micro-payments or unusual transfer destinations, that may signal scams or account takeovers. When high-risk transactions are flagged, verifying them through independent channels adds an extra layer of protection. This is only possible when FIs have technology that works faster than the speed of fraud.
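As a toy illustration of the kind of continuous monitoring rule described above, the sketch below flags an account that sends an unusual burst of micro-payments inside a short sliding window. The thresholds are illustrative assumptions, not tuned production values.

```python
from datetime import datetime, timedelta

# Illustrative rule parameters (assumptions for this sketch).
MICRO_LIMIT = 50.00             # ceiling for a "micro" payment
WINDOW = timedelta(minutes=10)  # sliding window length
MAX_MICRO_IN_WINDOW = 5         # burst threshold

def flag_rapid_micropayments(transactions):
    """Return True if more than MAX_MICRO_IN_WINDOW micro-payments
    fall inside any WINDOW-length span.

    `transactions` is a list of (timestamp, amount) tuples, assumed
    to be sorted by timestamp.
    """
    micro = [ts for ts, amount in transactions if amount <= MICRO_LIMIT]
    for i, start in enumerate(micro):
        # Count micro-payments inside the window opening at `start`.
        in_window = sum(1 for ts in micro[i:] if ts - start <= WINDOW)
        if in_window > MAX_MICRO_IN_WINDOW:
            return True
    return False
```

A production system would score such a rule probabilistically alongside destination risk and behavioural signals, rather than treating any single threshold as decisive.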

Deploying AI is certainly not without challenges. A robust platform must work in real time and maintain a single, 360-degree view of the customer. For example, a synthetic ID looks legitimate in isolation: a recent graduate, modest income, small credit limit. The pattern, however, may reveal something far more nefarious: a thin credit file, sudden requests for multiple cards, addresses that don't match utility records, device fingerprints shared across ‘different’ customers, and behavioural signals showing bot-like interaction rather than human banking patterns.
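One of the weak signals listed above, the same device fingerprint appearing behind several "different" customers, can be correlated with a few lines of code. Field names and the threshold here are illustrative assumptions.

```python
def devices_shared_across_customers(events, min_customers=3):
    """Map each device fingerprint to the distinct customer IDs seen on it,
    keeping only devices used by at least `min_customers` customers.

    `events` are hypothetical session records with `customer_id` and
    `device_fp` fields; one device serving many customers is a classic
    synthetic-identity and mule-ring indicator.
    """
    seen = {}
    for event in events:
        seen.setdefault(event["device_fp"], set()).add(event["customer_id"])
    return {fp: sorted(ids) for fp, ids in seen.items() if len(ids) >= min_customers}

events = [
    {"customer_id": "C1", "device_fp": "fp-abc"},
    {"customer_id": "C2", "device_fp": "fp-abc"},
    {"customer_id": "C3", "device_fp": "fp-abc"},
    {"customer_id": "C4", "device_fp": "fp-xyz"},
]
print(devices_shared_across_customers(events))  # fp-abc links C1, C2, C3
```

The value of such a check comes from combining it with the other signals in the paragraph above; in isolation, shared devices also occur legitimately, for example within households.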

The Future of Fraud Prevention

As we begin 2026, the fraud landscape facing financial institutions has never been more complex. New technologies like AI and deepfakes offer criminals novel attack modes, while synthetic identities blur the lines between legitimate and fraudulent customers, and scams are evolving in speed and sophistication. FIs also need to prepare for upcoming regulatory changes, particularly the EU’s PSD3 and PSR, which will introduce stricter requirements around fraud prevention, customer authentication and incident reporting.

Navigating the complex and rapidly evolving fraud landscape of 2026 demands not only advanced technology but also strategic expertise and tailored solutions. PwC, with its deep industry knowledge and global consulting experience, helps financial institutions develop comprehensive fraud risk management frameworks. Complementing this, Feedzai offers a cutting-edge platform specifically engineered to detect and prevent financial crime in real time. Together, PwC’s strategic insight and Feedzai’s technological innovation empower clients to build resilient, future-proof defenses, combining human intelligence with AI precision to outpace fraudsters. If you are interested in discussing any of these emerging fraud trends or exploring how your organization can strengthen its defenses, feel free to reach out.
