Scams in the Age of AI: A Wake-Up Call for Financial Institutions

09/10/25

Fraudsters are master adapters. They have continually adapted to changing circumstances and followed the path of least resistance. However, with access to AI, they're not just adapting; they're forging new territory. Scams are now faster, more convincing and harder to detect than ever before.

Feedzai's 2025 AI Trends in Fraud and Financial Crime Prevention report found that more than half of reported scams in the banking sector already involve some form of AI. Criminals are using deepfakes, voice cloning and synthetic identities as part of their everyday tools. These technologies make it possible to impersonate customer service agents in real time and to generate fake identities that bypass traditional verification systems, enabling criminals to open fraudulent accounts, secure loans and launder money with alarming ease.  

Regulators are actively working to keep pace with these developments by updating frameworks and strengthening enforcement measures. As AI continues to advance and become more accessible, banks and their customers are likely to encounter increasingly complex challenges, highlighting the need for ongoing adaptation and resilience. 

 

AI-Powered Attacks  

The rapid evolution and accessibility of AI have given scammers a powerful new weapon that significantly escalates the threat of financial crime. Fraudsters now exploit generative AI to craft highly convincing phishing attacks, often personalized with readily available online information about their targets. These attacks extend far beyond email, often impersonating customer service agents, complete with full backstories and requests that appear genuine. 

The most popular AI-driven threats include: 

  • Real-time Impersonations: Fraudsters mimic the behavior and communication style of trusted individuals during live interactions, making detection extraordinarily difficult. 
  • Voice Cloning: AI-generated voices replicate familiar tones with striking accuracy, making them a particularly potent tool for emotional manipulation at scale. 
  • Synthetic Identities: Criminals combine real and fabricated data to create new, credible identities capable of bypassing verification, opening accounts, obtaining loans and laundering money. 

As AI continues to advance, these attacks will only grow more sophisticated. Combating them requires a proactive, multi-layered defense.  

In response to the growing threat of AI-enabled scams, financial institutions are fighting AI with AI, turning to machine learning fraud prevention tools.  

 

From Silent Generation to Gen Alpha: How Fraud Targets Each Generation 

Scams do not affect all age groups in the same way. Each generation's relationship with technology shapes their exposure to risk and their ability to recognize fraud.  

Gen Z and Millennials 

Younger generations, particularly Gen Z and Millennials, are frequent adopters of mobile payment apps like Venmo or Cash App, cryptocurrency platforms and buy-now-pay-later services. Their comfort with these technologies can make them vulnerable to cleverly disguised scams on social media, messaging apps, or through influencer impersonations. Fake investment opportunities in crypto or non-fungible tokens (NFTs) often target this demographic, exploiting their fear of missing out (FOMO) on the next big thing. 

Gen X and older Millennials 

Straddling both traditional and digital financial tools, Gen X and older Millennials are frequent targets of phishing emails and workplace-themed scams that exploit busy schedules and divided attention. 

Baby Boomers and the Silent Generation 

Baby Boomers and the Silent Generation typically prefer traditional banking methods, making them more likely to use credit cards or make payments through in-person bank transfers. They may prefer postal payments over digital wallets or mobile banking apps. Being more cautious with technology hasn't spared Baby Boomers from the scamdemic. They are often targeted by emotionally manipulative schemes and tactics, such as fake bank calls, romance scams, or impersonation fraud involving family members. This group is particularly susceptible to voice cloning, which is an incredibly convincing way to exploit emotional connections. For example, the "grandparent scam," where fraudsters impersonate a grandchild in distress, has become more convincing with AI-generated voices. 

Across generations, the vulnerabilities differ: younger consumers may overlook risks due to overconfidence or digital fatigue, while older users may struggle to verify authenticity quickly. Fraudsters tailor their methods accordingly, from fake shopping apps for younger demographics to tech support scams for seniors. 

To address this diversity of risk, banks and regulators must adopt a multi-faceted strategy: 

  • Develop age-specific awareness and education programs. 
  • Implement robust, universal verification processes across channels. 
  • Apply AI-based fraud detection with transparency and explainability to meet evolving requirements, such as PSD3. 

By aligning defenses with generational behaviors and financial preferences, institutions can deliver more effective protection and education, while regulators ensure consistent safeguards. A nuanced and collaborative approach will be critical to counter scams now and as they continue to evolve. 

 

AI-Powered Defense Strategies 

Around 90% of financial institutions now use some form of AI for real-time monitoring and detection. These systems analyze transaction patterns, login behavior, device usage and biometric signals, such as typing rhythm and navigation habits. The goal is to catch fraud as it happens, before any damage is done, while minimizing false alarms that frustrate legitimate customers.  
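To make the idea of multi-signal monitoring concrete, the sketch below blends several normalized risk signals into one score. The signal names, weights and example values are illustrative assumptions, not any vendor's actual model; real systems learn these weights from labeled fraud data.

```python
# Minimal sketch of multi-signal fraud scoring (illustrative only).
# Signal names, weights and example values are assumptions, not a real product's.

def risk_score(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of normalized risk signals, each in [0, 1]."""
    total_weight = sum(weights.values())
    return sum(signals[name] * w for name, w in weights.items()) / total_weight

weights = {"device_familiarity": 0.3, "typing_deviation": 0.3, "txn_anomaly": 0.4}

# A login from a known device with familiar typing but an unusual transfer amount:
signals = {"device_familiarity": 0.1, "typing_deviation": 0.2, "txn_anomaly": 0.9}
score = risk_score(signals, weights)
print(f"risk score: {score:.2f}")  # 0.1*0.3 + 0.2*0.3 + 0.9*0.4 = 0.45
```

The point of combining weak signals is that no single one (an odd amount, a new device) is decisive on its own, but together they separate routine activity from likely fraud without blocking legitimate customers.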

To strengthen defenses, banks are advancing in three key areas: 

  • Explainable AI (XAI): Ensures transparency by clarifying why a transaction was flagged, supporting internal audits and regulatory compliance.  
  • Federated learning: A privacy-conscious way for banks to collaborate on fraud models without sharing raw data, an approach that is becoming especially important under tightening regulations on data use. 
  • Generative AI: Used to simulate emerging fraud scenarios and stress-test defenses, helping institutions anticipate new attack methods. 
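The federated learning idea above can be sketched in a few lines: each bank trains a model locally and shares only its parameters, which a coordinator averages. The toy example below (pure Python, made-up weight values) shows equal-weighted federated averaging; production deployments add secure aggregation and differential privacy on top.

```python
# Toy federated averaging: banks share model weights, never raw transactions.
# Bank names and weight values are illustrative assumptions.

def federated_average(local_weights: list[list[float]]) -> list[float]:
    """Element-wise mean of each participant's model weights (FedAvg, equal weighting)."""
    n = len(local_weights)
    return [sum(ws) / n for ws in zip(*local_weights)]

# Each bank trains a small fraud model locally on its own data...
bank_a = [0.2, 0.8, 0.5]
bank_b = [0.4, 0.6, 0.7]
bank_c = [0.3, 0.7, 0.6]

# ...and only the weights leave the premises.
global_model = federated_average([bank_a, bank_b, bank_c])
print([round(w, 3) for w in global_model])  # [0.3, 0.7, 0.6]
```

Because only parameters cross institutional boundaries, each bank benefits from patterns seen by its peers without exposing customer data, which is what makes the approach attractive under tightening data-use regulation.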

Combined, these innovations reflect a shift toward more adaptive, transparent and collaborative defenses in the fight against financial crime. 

Beyond technology, the financial services sector is moving toward collective defense. Financial institutions, fintechs and regulators are seeking a more structured and consistent approach to fraud, with shared threat intelligence platforms and industry-wide AI governance standards beginning to take shape. This collaborative approach is considered essential for countering rapidly evolving, highly networked fraud operations. 

 

Global Regulatory Momentum & Coming Rules 

Regulators worldwide are intensifying their efforts to counter AI-driven fraud, with new rules emphasizing stronger detection, consumer protection and industry collaboration.  

  • European Union: The transition from Payment Services Directive 2 (PSD2) to the upcoming Payment Services Directive 3 (PSD3) marks a significant shift in addressing emerging digital threats. While PSD2 focused on opening up banking APIs and enhancing online payment security, PSD3 aims to address the challenges posed by AI and advanced technologies in the financial services sector. This new framework is expected to come into force between late 2025 and early 2026. It should introduce more stringent requirements for fraud detection, customer authentication and data protection in an AI-driven environment. 
  • UK: The Payment Systems Regulator (PSR) is taking a proactive stance, pushing for enhanced industry collaboration and increased adoption of AI in fraud detection and prevention. According to UK Finance, as British regulators grapple with annual fraud losses exceeding £1 billion, they are advocating for robust consumer protection measures and improved fraud reporting mechanisms, particularly in the realm of authorized push payment (APP) scams. 
  • Australia: The Scam Prevention Framework introduces tough obligations for banks and digital platforms, with hefty penalties for noncompliance. The move reflects a broader global trend toward stricter, technology-aware oversight. 
  • United States: The SEC has tightened cybersecurity disclosure rules for public companies, requiring them to report material cyber incidents within four business days. Companies must also disclose their cybersecurity governance and risk management practices in annual filings. These changes aim to improve transparency and investor protection, while ensuring companies are prepared to respond to evolving digital threats.  
  • Singapore: The Monetary Authority of Singapore has introduced guidelines emphasizing fairness and transparency in AI-driven financial services.  

Beyond national regulators, global bodies such as the Financial Action Task Force (FATF) are updating their recommendations to encourage risk-based approaches, rather than blanket mandates. This model requires institutions to assess their unique exposure and apply proportionate controls to ensure that resources are directed where risk is highest. The risk-based model also enables more flexible, targeted and efficient responses, allowing organizations to allocate resources where they are most needed and adapt swiftly to evolving threats, including those introduced by innovations such as digital assets, AI and decentralized finance. 

These diverse approaches share a common goal: addressing the shift from large-scale, single-target attacks to more pervasive, technologically sophisticated schemes affecting a broader range of consumers. Regulators are no longer reacting in a piecemeal fashion to fraud but are instead building AI-conscious frameworks designed for speed, adaptability and resilience. 

 

The Future Shaped by AI and Regulations  

The future of scam prevention will be defined by speed, adaptability, a deep understanding of AI, and unprecedented cross-sector collaboration. As AI-generated scams become increasingly sophisticated, detection tools must evolve in parallel, combining advanced analytics with an understanding of human behavior across generations. 

Tomorrow's scams will be increasingly personal. AI will eventually learn to mimic not just voices and faces, but also how people think, speak and behave. This inevitable development raises new risks and could prove extremely dangerous. Younger users may struggle to distinguish between social content and manipulation, while the trust of older generations will continue to be weaponized through emotional triggers. At the same time, AI sophistication may backfire by making people suspicious of everything, including real emergency calls from loved ones, genuine business messages, or actual news, simply because they might be AI-generated. In this scenario, banks will need to rethink how they maintain customer confidence.  

Financial institutions that succeed in this environment will balance three priorities: 

  1. More intelligent detection: Moving beyond static rules to dynamic, multi-signal intelligence that adapts in real time, including behavioral analytics (e.g., typing rhythm, swipe patterns), contextual risk scoring across transactions and devices, federated intelligence that shares insights without exposing raw data, and even generative AI simulations to stress-test defenses. The goal is to stop fraud in the flow of activity before funds leave an account or a customer is manipulated into authorizing a transfer. 
  2. User-centered design: Security should be intuitive, transparent and frictionless for legitimate customers. Instead of rigid, one-size-fits-all authentication (like one-time passcodes for every action), banks can use continuous behavioral monitoring in the background to assess risk silently. A routine bill payment on a trusted device might pass through seamlessly, while a high-risk transfer triggers a clear, user-friendly prompt with multiple verification options. This approach balances protection with convenience, building customer trust rather than eroding it. 
  3. Regulatory alignment: Meeting requirements for AI explainability and auditable decision-making while complying with PSD3, UK rules and global standards. 
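The risk-based friction described in priority 2 can be expressed as a simple policy: low-risk activity passes silently, mid-risk activity gets a lightweight check, and high-risk activity triggers strong verification. The thresholds and tier names below are illustrative assumptions for the sketch, not prescribed values.

```python
# Sketch of risk-tiered authentication (thresholds and names are illustrative).

def authentication_step(risk_score: float) -> str:
    """Map a 0-1 risk score to a level of customer friction."""
    if risk_score < 0.3:
        return "allow"            # e.g., routine bill payment on a trusted device
    if risk_score < 0.7:
        return "soft_challenge"   # e.g., an in-app confirmation prompt
    return "strong_verification"  # e.g., multi-option step-up before a high-risk transfer

print(authentication_step(0.1))  # allow
print(authentication_step(0.5))  # soft_challenge
print(authentication_step(0.9))  # strong_verification
```

Tiering friction this way keeps everyday transactions seamless while reserving stronger checks for the small fraction of activity that warrants them, which is the balance of protection and convenience the section describes.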

No single bank can fight scams alone. Sharing intelligence and working within new regulatory frameworks is the only way to build defenses strong enough to outpace AI-driven fraud and to preserve customer trust. 

 

Moving from Scams to Trust 

Scams are evolving fast, powered by AI that makes deception easier and more believable. Regulators are tightening their grip, but the real test will be how quickly financial institutions adapt. 

Collaboration will be the hallmark of tomorrow's fraud leaders. Working together and deploying intelligent tools and systems designed for speed, transparency and trust is how we protect customers going forward. AI will help banks stay current with fraud trends, but, more importantly, it will keep them ahead of the curve. 

PwC and Feedzai have come together to combine deep industry expertise with the most advanced AI-native platform in financial crime prevention. Together, we're helping institutions strengthen defenses, redesign operations and protect customers at a time when trust has never been harder to earn or more valuable to keep. 
