Fraudsters are master adapters. They have continually adjusted to changing circumstances and followed the path of least resistance. Now, with access to AI, they're not just adapting; they're forging new territory. Scams are faster, more convincing and harder to detect than ever before.
Feedzai's 2025 AI Trends in Fraud and Financial Crime Prevention report found that more than half of reported scams in the banking sector already involve some form of AI. Criminals are using deepfakes, voice cloning and synthetic identities as part of their everyday toolkit. These technologies make it possible to impersonate customer service agents in real time and to generate fake identities capable of bypassing traditional verification systems, enabling criminals to open fraudulent accounts, secure loans and launder money with alarming ease.
Regulators are actively working to keep pace with these developments by updating frameworks and strengthening enforcement measures. As AI continues to advance and become more accessible, banks and their customers are likely to encounter increasingly complex challenges, highlighting the need for ongoing adaptation and resilience.
The rapid evolution and accessibility of AI have given scammers a powerful new weapon that significantly escalates the threat of financial crime. Fraudsters now exploit generative AI to craft highly convincing phishing attacks, often personalized with readily available online information about their targets. These attacks extend far beyond email: fraudsters impersonate customer service agents with full backstories and requests that appear genuine.
The most prevalent AI-driven threats include deepfakes, voice cloning, synthetic identities and AI-personalized phishing.
As AI continues to advance, these attacks will only become more sophisticated. Combating them requires a proactive, multi-layered defense.
In response to the growing threat of AI-enabled scams, financial institutions are fighting AI with AI, turning to machine learning fraud prevention tools.
Scams do not affect all age groups in the same way. Each generation's relationship with technology shapes their exposure to risk and their ability to recognize fraud.
Gen Z and Millennials
Younger generations, particularly Gen Z and Millennials, are frequent adopters of mobile payment apps like Venmo or Cash App, cryptocurrency platforms and buy-now-pay-later services. Their comfort with these technologies can make them vulnerable to cleverly disguised scams on social media, messaging apps, or through influencer impersonations. Fake investment opportunities in crypto or non-fungible tokens (NFTs) often target this demographic, exploiting their fear of missing out (FOMO) on the next big thing.
Gen X and older Millennials
Straddling both traditional and digital financial tools, Gen X and older Millennials are frequent targets of phishing emails and workplace-themed scams that exploit busy schedules and divided attention.
Baby Boomers and the Silent Generation
Baby Boomers and the Silent Generation typically prefer traditional banking methods, making them more likely to use credit cards or make payments through in-person bank transfers. They may prefer postal payments over digital wallets or mobile banking apps. Being more cautious with technology hasn't spared Baby Boomers from the scamdemic. They are often targeted by emotionally manipulative schemes and tactics, such as fake bank calls, romance scams, or impersonation fraud involving family members. This group is particularly susceptible to voice cloning, which is an incredibly convincing way to exploit emotional connections. For example, the "grandparent scam," where fraudsters impersonate a grandchild in distress, has become more convincing with AI-generated voices.
Across generations, the vulnerabilities differ: younger consumers may overlook risks due to overconfidence or digital fatigue, while older users may struggle to verify authenticity quickly. Fraudsters tailor their methods accordingly, from fake shopping apps for younger demographics to tech support scams for seniors.
To address this diversity of risk, banks and regulators must adopt a multi-faceted strategy.
By aligning defenses with generational behaviors and financial preferences, institutions can deliver more effective protection and education, while regulators ensure consistent safeguards. A nuanced and collaborative approach will be critical to counter scams now and as they continue to evolve.
Around 90% of financial institutions now use some form of AI for real-time monitoring and detection. These systems analyze transaction patterns, login behavior, device usage and biometric signals, such as typing rhythm and navigation habits. The goal is to catch fraud as it happens, before any damage is done, while minimizing false alarms that frustrate legitimate customers.
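The behavioral scoring described above can be illustrated with a minimal sketch. The example below is a hypothetical z-score check on a customer's historical transaction amounts, not any specific vendor's method; real systems combine many more signals (device, login behavior, biometrics) and learned models.

```python
# Minimal sketch of per-customer transaction anomaly scoring (illustrative only).
# `anomaly_score` and `is_suspicious` are hypothetical names for this example.
from statistics import mean, stdev

def anomaly_score(history: list[float], amount: float) -> float:
    """Z-score of a new transaction amount against the customer's history."""
    if len(history) < 2:
        return 0.0  # not enough data to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return 0.0 if amount == mu else float("inf")
    return abs(amount - mu) / sigma

def is_suspicious(history: list[float], amount: float,
                  threshold: float = 3.0) -> bool:
    """Flag amounts more than `threshold` standard deviations from the norm."""
    return anomaly_score(history, amount) > threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0]
print(is_suspicious(history, 50.0))    # in line with past behavior -> False
print(is_suspicious(history, 5000.0))  # far outside the baseline -> True
```

In practice the threshold is tuned to balance catch rate against the false alarms that frustrate legitimate customers, which is exactly the trade-off the systems above are designed to manage.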
To strengthen defenses, banks are advancing across several key areas, from adaptive detection models to explainable AI and shared intelligence.
Combined, these innovations reflect a shift toward more adaptive, transparent and collaborative defenses in the fight against financial crime.
Beyond technology, the financial services sector is moving toward collective defense. Financial institutions, fintechs and regulators are seeking a more structured and consistent approach to fraud, with shared threat intelligence platforms and industry-wide AI governance standards beginning to take shape. This collaborative approach is considered essential for countering rapidly evolving, highly networked fraud operations.
Regulators worldwide are intensifying their efforts to counter AI-driven fraud, with new rules emphasizing stronger detection, consumer protection and industry collaboration.
Beyond national regulators, global bodies such as the Financial Action Task Force (FATF) are updating their recommendations to encourage risk-based approaches, rather than blanket mandates. This model requires institutions to assess their unique exposure and apply proportionate controls to ensure that resources are directed where risk is highest. The risk-based model also enables more flexible, targeted and efficient responses, allowing organizations to allocate resources where they are most needed and adapt swiftly to evolving threats, including those introduced by innovations such as digital assets, AI and decentralized finance.
These diverse approaches share a common goal: addressing the shift from large-scale, single-target attacks to more pervasive, technologically sophisticated schemes affecting a broader range of consumers. Regulators are no longer reacting in a piecemeal fashion to fraud but are instead building AI-conscious frameworks designed for speed, adaptability and resilience.
The future of scam prevention will be defined by speed, adaptability, AI understanding, and unprecedented cross-sector collaboration. As AI-generated scams become increasingly sophisticated, detection tools must evolve in parallel, combining advanced analytics with an understanding of human behavior across generations.
Tomorrow's scams will be increasingly personal. AI will eventually learn to mimic not just voices and faces, but also how people think, speak and behave. This inevitable development raises new risks and could prove extremely dangerous. Younger users may struggle to distinguish between social content and manipulation, while the trust of older generations will continue to be weaponized through emotional triggers. At the same time, AI sophistication may backfire by making people suspicious of everything, including real emergency calls from loved ones, genuine business messages, or actual news, simply because they might be AI-generated. In this scenario, banks will need to rethink how they maintain customer confidence.
Financial institutions that succeed in this environment will balance protection, customer experience and collaboration.
No single bank can fight scams alone. Sharing intelligence and working within new regulatory frameworks is the only way to build defenses strong enough to outpace AI-driven fraud and to preserve customer trust.
Scams are evolving fast, powered by AI that makes deception easier and more believable. Regulators are tightening their grip, but the real test will be how quickly financial institutions adapt.
Collaboration will be the touchstone of tomorrow's fraud leaders. Working together and deploying intelligent tools and systems designed for speed, transparency and trust is how we protect customers going forward. AI will help banks stay current with fraud trends, but, more importantly, it will keep them ahead of the curve.
PwC and Feedzai have come together to combine deep industry expertise with the most advanced AI-native platform in financial crime prevention. Together, we're helping institutions strengthen defenses, redesign operations and protect customers at a time when trust has never been harder to earn or more valuable to keep.