Move fast, think slow: How financial services can strike a balance with GenAI

  • Insight
  • December 08, 2023

Financial services organisations are moving forward with generative AI to enhance efficiency, customer experience, services and more. But finding the right approach can be challenging.

Artificial intelligence (AI) has turned a corner. Its latest manifestation, generative AI (GenAI), offers powerful, ultra-accessible new capabilities to financial institutions (FIs). And the race to put them to work is already underway.

We’re seeing organisations begin to use GenAI to assist in many mission-critical areas, such as customer due diligence, risk scanning, and reporting. This is all thanks to the technology’s ability to extract and aggregate data, automate report preparation, and highlight information gaps across a wide variety of non-standard documents, systems and media.
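
To make this concrete, here is a minimal sketch of that kind of extraction-and-gap-flagging task, assuming an OpenAI-compatible chat API; the model name, field list and extract_kyc_fields helper are hypothetical illustrations, not a production design.

```python
# Minimal sketch: LLM-based field extraction from a non-standard document,
# flagging missing information rather than letting gaps pass silently.
# Assumes the OpenAI Python SDK; model and field names are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

FIELDS = ["customer_name", "jurisdiction", "account_opening_date"]

def extract_kyc_fields(document_text: str) -> dict:
    prompt = (
        "Extract the following fields from the document and reply as JSON, "
        f"using null for anything not present: {', '.join(FIELDS)}.\n\n"
        f"Document:\n{document_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    extracted = json.loads(response.choices[0].message.content)
    gaps = [f for f in FIELDS if extracted.get(f) is None]
    return {"fields": extracted, "gaps": gaps}
```

The explicit list of gaps mirrors the "highlight information gaps" capability described above: a reviewer sees what the document did not contain, rather than a confident-looking but incomplete record.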

These advances in AI come at a pivotal moment for financial services as an industry. Our report on the future of banking examines what we perceive as universal challenges facing banks across the world over the next decade, including shifting customer expectations, rising expenses, regulatory complexity, new sources of risk and talent shortages. Asset managers and insurers are coming up against many of the same obstacles.

At the same time, historic drivers of the industry’s extraordinary growth and profitability—monetary and balance sheet expansion, globalisation, wealth concentration, and asset-value inflation—seem to be running out of steam. Although the rate rises of the past 18 months have provided an earnings boost for FIs globally, we are already seeing this get competed away.

For these reasons and more, the opportunities that generative AI presents to gain efficiencies, improve customer experiences and offer new services could not have come at a better time.

However, this is a marathon, not a sprint. Successfully adopting GenAI requires financial institutions to strike the right balance between prudence and urgency. There are real risks in pushing ahead too fast before putting critical skills, tools and capabilities in place. At the same time, going too slow could take organisations out of the running, especially when first movers are already realising value from the technology.

Leaders of financial institutions need to get smart about GenAI—not just generally, but specifically in the context of financial services—and hone their strategies to succeed.

The case for moving fast

The advances from generative AI, particularly the easy-to-use and general-purpose machine learning algorithms known as large language models (LLMs), are categorically different from advances offered by any other form of AI to date. Previous developments in AI excelled in narrow tasks, nearly always required technical talent to implement and affected only a subset of people’s work. GenAI can perform myriad tasks and, perhaps most importantly, operates through natural language. Communication is the most essential and uniquely human capability, one that sits at the root of general human intelligence itself. The ability to “talk” to GenAI like a person means it can be employed in almost any workplace, in many more functions than was previously considered practical. In fact, we’re seeing new use cases emerge every week.

As a result of this rapid growth, we expect to see practical applications proliferate much faster than with many previous rounds of technological innovation, which often took years to reach a tipping point. Indeed, we are already observing this acceleration in FIs all over the world, as well as in the industry’s extended ecosystem and value chain, as organisations experiment with applying generative AI to a multitude of tasks. Non-traditional players can also harness GenAI to find entirely new ways to compete in the financial services landscape, including offering innovative payment methods, embedded finance and more. As GenAI becomes more fully integrated with other technologies (such as robotics, symbolic computation, vision and voice) to enable the creation of AI-powered systems rather than just solutions, the potential grows exponentially.

Several commercial banks, for example, are working on tools to automate the preparation, aggregation and validation of documents for such things as trade finance, know your customer (KYC) efforts and compliance reporting. Others are already allowing developers to use GenAI to help them write the code that supports many of these activities. Banks, wealth managers and insurance companies are developing interactive avatars. Some avatars guide employees through libraries of internal policies and external regulations, and others help customers navigate “self-help” menus to simultaneously increase customer satisfaction and take pressure off contact centres. At least one major bank is reported to be experimenting with client-facing conversational tools that might go so far as providing general financial advice, including in such areas as portfolio construction and asset selection.

This rapid adoption of early use cases provides a powerful incentive for financial services leaders to expedite implementation of their GenAI strategies. Doing so is crucial not only for capturing immediate value and maintaining a competitive edge today but also for developing the capabilities to stay relevant tomorrow. For example, even simple applications, such as basic conversational analytics, can become building blocks for more complex ones later. These complex applications could include customising loan agreements, insurance policies and derivatives contracts; updating internal policies (e.g., to reflect changes in regulation or legislation, which can take months or even years today); and crafting role-purpose statements or employment agreements. However, without the necessary foundational elements in place and the organisational experience needed for their deployment, the effective and safe utilisation of more sophisticated applications could become challenging, if not unfeasible.

There’s another reason to get moving: your people won’t want to lose their edge. Our 2023 Global Workforce Hopes and Fears Survey found that 52% of workers were positive in some way about AI’s potential to improve their jobs and careers. Your employees will begin to see peers in other companies acquire valuable skills by incorporating the new technology into their daily work; if you don’t provide your workforce with the means to keep up, they may be tempted to access GenAI tools via personal devices (so-called “shadow usage”), exposing your organisation to unanticipated risk.

Thinking slow to manage the risks

Of course, deploying GenAI introduces new risks, such as accidental exposure of customer data, inadvertent misuse of intellectual property, failure to recognise incorrect information and the perpetuation of societal bias. Such risks can result in customer harm, commercial loss, brand damage and even regulatory or legal sanction. As heavily regulated institutions, organisations providing financial services have special legal obligations that raise the stakes of missteps, including: 

  • fair-lending obligations that are broader and more consequential than ordinary anti-discrimination obligations in many jurisdictions
  • data-handling regulations that go above and beyond common consumer protection and privacy obligations
  • customer “best-interest” duties that place important restrictions on who can say what to whom. 

These obligations render GenAI a double-edged sword. On the one hand, well-designed AI models can support compliant behaviour and help catch errors, just as driver assistance can make roads safer. On the other hand, flaws in such a highly accessible tool can take small mistakes and replicate them in real time, and at scale.

And there’s more. In many jurisdictions, regulators are releasing new guidance and standards that explicitly control the use of AI, with which financial institutions will need to become familiar to ensure compliance. To add to this, they’ll be required to keep track of emerging risks from more sophisticated cybercrime and GenAI-enabled fraud; manage a lack of clarity in how the technology may impact ecosystem partners, suppliers, customers and employees; and navigate concerns about increased energy and water use (especially in data centres) impacting climate goals. All this should make it clear that simply letting a thousand flowers bloom with generative AI applications is an easy way for any financial institution to get itself into trouble.

Leaders of FIs will need to thoroughly familiarise themselves with every vector of risk and use a responsible AI toolkit that helps determine which GenAI applications can safely move forward while enabling effective risk management right from the start.

Striking the balance: Five steps to get moving

With all the risks at play, it’s understandable for FIs to adopt an overly cautious posture. But like it or not, competitors are using these tools, and as those tools start delivering tangible value (or savings), your customers may not wait for you to catch up.

For all the risks of early adoption, the risks of not acting are at least as great. It’s time to start training people in a broad-based way, creating opportunities for safe experimentation and use, and demonstrating the capacity to capture value.

To that end, we see five critical steps to getting started in the GenAI marathon, which include practices that can help manage the risks. All are non-negotiable, and some may need repeating.

Step 1: Ensure alignment with enterprise strategy

Whether organisations pursue bottom-up idea generation in a hackathon, top-down directives that emerge from an executive off-site, or a mix, leaders should align their generative AI strategy with the broader business strategy by ensuring they have clear answers to three questions:

  • What are the most important business objectives we wish to achieve through the use of GenAI?
  • What are the boundaries of our risk appetite in achieving those objectives?
  • What additional constraints do we have that derive from our whole-enterprise strategy, including within the areas of environmental, social and governance (ESG); brand; investor and regulatory relations; and alliances and partnerships?

Examples of business objectives include improving productivity, quality, compliance and risk management, or creating a new revenue stream. Your choice will depend on your organisation’s particular context and strategy. Consider, for example, a market leader with saturated share: GenAI applications that leverage scale to maximise efficiency and productivity might be the most attractive low-hanging fruit. Meanwhile, a neobank looking to win customers and make its mark might be more focused on applications that create compelling or distinct customer experiences or services.

The nature of GenAI, however, will often enable you to address many objectives at once. Automating the preparation of credit assessment and loan-verification information, for example, enhances productivity, but also likely improves quality, streamlines the customer and employee experience, and may even increase revenue and market share (depending on the state of the loan market).

Risk appetite as a strategic consideration is self-explanatory; it is, of course, also context-specific and different for every organisation. However, for financial services, we would expect to see much more scrutiny and caution at this time with any AI that is customer facing or that affects regulatory and legal obligations. Even more caution is warranted for any fully digitised end-to-end process.

Both your objectives and risk appetite will be influenced by your current alliance and partner strategy, even if it was formulated without GenAI in mind. In an area as new and fast-moving as AI, there can be no presumptive partner choices, no matter how deep and long-lasting existing relationships may be, and it’s worth applying extra scrutiny to “sole sourcing” arrangements at this time. In our own firm, GenAI has been the catalyst for new partnerships for applications such as preparing legal briefs, contract review, and the summary and analysis of customer conversations.

No matter the business objective, the risk profile or how outward-facing your generative AI activities are, they should be consistent with all aspects of the enterprise strategy, including your growth strategy, shareholder story, customer brand promise and employee value proposition. That consistency needs to be obvious and explainable not just for specialists in tech, but for all senior leaders and the board, a point we discuss in greater detail in our recent primer on the implications of GenAI for directors.

Step 2: Ready the organisation with training, guard rails and protocols

The applications and use cases we’re seeing today are only the most obvious at this early stage of generative AI. Many will involve the kinds of low-volume tasks that have historically been too complex to automate, too infrequent to justify reengineering away and often too mundane for senior leaders to know much about. These are the grains of sand in the gears across FIs, and the reason that simplification, digitisation and transformation have been so hard to achieve.

This kind of innovation won’t come from the top; it will be led by those closest to the work. FIs must ensure that all employees have access to appropriate universal training, just as they do in other areas that pose risk, such as security, customer protection, data handling and privacy. GenAI itself can play a role in delivering this training via conversational training systems and support bots that offer employees guidance as they go about their day-to-day jobs.

At the same time, the culture at many FIs will need to change to enable GenAI innovation. Around half of CEOs surveyed by PwC said that their culture doesn’t tolerate small-scale failures or encourage dissent and debate—and two-thirds of employees agreed with that dim outlook. Yet trial and error will be an essential part of building these capabilities.

However, we’re not suggesting unrestrained experimentation. Organisations will need to provide rules, frameworks and protocols to guide employees. Such guard rails serve to instil confidence in workers, offering clearly defined spaces that are open for exploration.
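
To make the idea of guard rails concrete, here is a minimal sketch of a pre-submission check that could sit between employees and a GenAI tool; the approved-use list and detection patterns are illustrative assumptions, not a complete control.

```python
# Minimal sketch of a pre-prompt guard rail: restrict usage to approved
# purposes and block obvious customer identifiers before anything reaches
# a GenAI tool. The approved uses and patterns are illustrative only.
import re

APPROVED_USES = {"policy_lookup", "document_summary", "code_assist"}

PII_PATTERNS = {
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def check_prompt(prompt: str, use_case: str) -> list[str]:
    """Return a list of violations; an empty list means the prompt may proceed."""
    violations = []
    if use_case not in APPROVED_USES:
        violations.append(f"use case '{use_case}' is not approved")
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            violations.append(f"possible {name} detected in prompt")
    return violations

issues = check_prompt("Summarise policy CR-104 on loan provisioning", "document_summary")
if issues:
    raise PermissionError("; ".join(issues))  # route to review, not the model
```

Checks like these define the clearly bounded spaces employees can explore with confidence: the rules are explicit, testable and versioned, rather than left to individual judgment.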

Finally, it’s worth thinking about how to prepare for GenAI deployment; the best approach will likely depend on the objectives being pursued. For repeatable, moderate-value use cases deep inside the organisation, we find that small, agile multidisciplinary teams (what we call “pods” in your “AI factory”) can create enormous value incredibly fast. By contrast, mission-critical and differentiating innovation in areas such as identity, digital currencies and embedded finance will require a cross-enterprise and broadly coordinated approach.

Step 3: Build tools for development, integration and operationalisation

Once the strategy, frameworks, training and capabilities are in place, you will need to select several critical tools and platforms on which teams can do the work. Here, it makes sense for FIs to start with sourcing (or building) the foundation models on which those tools and platforms are built. In that selection process, the choice between open-source and closed-source models should be considered, as should such factors as model size, portability, energy and water consumption (there may be reporting requirements in some territories), flexibility, stability, price, security, transparency, traceability and customisability.
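
One way to keep such a selection process transparent and repeatable is a simple weighted scorecard over the factors listed above; the criteria weights, candidate names and scores below are purely illustrative assumptions.

```python
# Minimal sketch of a weighted scorecard for foundation-model selection.
# Weights and 1-5 scores are illustrative; an FI would set its own from
# its risk appetite, reporting obligations and evaluation results.
WEIGHTS = {
    "security": 0.25, "transparency": 0.20, "customisability": 0.15,
    "price": 0.15, "stability": 0.15, "energy_and_water_use": 0.10,
}

CANDIDATES = {
    "open_source_model": {"security": 4, "transparency": 5, "customisability": 5,
                          "price": 5, "stability": 3, "energy_and_water_use": 3},
    "closed_api_model":  {"security": 3, "transparency": 2, "customisability": 2,
                          "price": 3, "stability": 5, "energy_and_water_use": 4},
}

def weighted_score(scores: dict) -> float:
    return sum(WEIGHTS[criterion] * scores[criterion] for criterion in WEIGHTS)

for name, scores in sorted(CANDIDATES.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
```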

On top of every foundation model will sit the rest of the GenAI development stack. At this stage, teams must decide where to host the model and data development environment, and make choices on the development platform and supporting tools, interfaces with other systems, data protection, mirroring (if appropriate) and storage, as well as access and other controls.
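
These choices are easier to oversee when captured in one reviewable, version-controlled configuration; a hypothetical sketch follows, with every value a placeholder rather than a recommendation.

```python
# Hypothetical record of GenAI stack decisions, kept under version control
# so each choice is explicit and auditable. All values are placeholders.
GENAI_STACK = {
    "model_hosting": "private_cloud",          # vs. a vendor-hosted API
    "dev_environment": "isolated_vpc",
    "interfaces": ["document_store", "crm"],   # systems the model may touch
    "data_protection": {"encryption_at_rest": True, "tokenise_pii": True},
    "mirroring": False,                        # no replication outside region
    "storage": "eu_region_object_store",
    "access_controls": {"role_based": True, "prod_requires_approval": True},
}
```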

Decisions about data access will be especially important, as a significant obstacle for large-scale AI deployment is the availability of high-quality data. Large “lakes” of unstructured data are both an opportunity and a risk for GenAI. On the one hand, models such as LLMs are adept at making connections and finding structure where it is not obvious. On the other hand, unstructured, unreliable and incomplete data introduces noise and creates gaps, which an LLM may try to fill in ways that could introduce errors and risk. A coherent and consistent generative AI data-management strategy is vital for FIs that want to maximise the potential benefits.
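
A small illustration of what such a data-management strategy can look like in practice: a quality gate that keeps sparse or poorly described documents out of the GenAI pipeline, so the model has fewer gaps to fill with guesses. The required metadata and length threshold are assumptions for the sketch.

```python
# Minimal sketch of a data-quality gate for an unstructured data "lake":
# only documents passing basic completeness checks are released to a GenAI
# pipeline. Required metadata and the length threshold are illustrative.
REQUIRED_METADATA = {"source_system", "as_of_date", "owner"}
MIN_CHARS = 200  # below this, treat the document as too sparse to be reliable

def admit_document(text: str, metadata: dict) -> tuple[bool, list[str]]:
    problems = []
    missing = REQUIRED_METADATA - metadata.keys()
    if missing:
        problems.append(f"missing metadata: {sorted(missing)}")
    if len(text.strip()) < MIN_CHARS:
        problems.append("document too sparse; a model may fill gaps with guesses")
    return (not problems, problems)

ok, problems = admit_document("Short fragment.", {"owner": "ops"})
print(ok, problems)  # False, with both issues listed
```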

Step 4: Embed responsible AI practices throughout the organisation

Given the rapid evolution of the GenAI landscape, rules, tools and guard rails that work today may become obsolete tomorrow. As a result, FIs need a holistic process and clear framework for establishing those guard rails, which includes overseeing and monitoring them, keeping them up to date, and doing it all in a manner consistent with the organisation’s approach to governance, accountability and transparency. This is necessary for any organisation, but for regulated entities like FIs, it’s absolutely critical to not only get it right but be able to demonstrate that it’s right.

Organisations have an abundance of questions to consider, which PwC’s Responsible AI Toolkit can help them navigate, including:

  • When projects are approved, who provides oversight and monitoring?
  • What diligence is required on existing frameworks and protocols to ensure they remain fit for purpose?
  • What additional rules are required for data on which model training might someday be done?
  • When and how should the risk of hidden bias, inaccuracy or private information leaks be assessed?
  • How will we know when external support is needed, and where it should come from?
  • How does all this interact with the overall approach to corporate risk management and governance, and who needs to do what?
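
One practical way to keep the answers to these questions auditable is to encode them in a structured approval record that every GenAI project must carry; the sketch below is a hypothetical illustration, and all field names are assumptions.

```python
# Hypothetical approval record for a GenAI use case, encoding the oversight
# questions above as required fields so that no answer can be skipped.
from dataclasses import dataclass
from datetime import date

@dataclass
class GenAIApprovalRecord:
    use_case: str
    oversight_owner: str                   # who oversees and monitors post-approval
    framework_review_due: date             # when existing guard rails are re-checked
    training_data_rules: str               # additional rules for future training data
    bias_and_leakage_assessed: bool        # hidden bias / inaccuracy / leak check done
    external_support: str | None = None    # where outside help comes from, if needed
    risk_committee_signoff: bool = False   # link into corporate risk governance

record = GenAIApprovalRecord(
    use_case="KYC document summarisation",
    oversight_owner="model-risk-team",
    framework_review_due=date(2024, 6, 30),
    training_data_rules="no customer PII without tokenisation",
    bias_and_leakage_assessed=True,
)
assert record.risk_committee_signoff is False  # not yet cleared for production
```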

Step 5: Select and prioritise use cases

Finally, with so many possible areas in which generative AI can help, such as business and strategic insight or risk management, how do FIs decide where to focus first? There are three vectors to consider: value, complexity and reusability.

Value will be indicated not only by the degree to which any activity or service is accelerated or transformed by GenAI but also by the importance of that activity to customers and the scale at which a GenAI solution can be applied. 

Complexity involves the difficulty of developing or deploying the GenAI solution, as well as the ability to manage it safely in production. Activities that have lower inherent risk (e.g., because they are not customer facing, or because the service isn’t strategically critical or entirely new) are probably best to tackle first, at least for organisations in the early stages of AI maturity.

Reusability will be a function of whether delivering a use case supports the accumulation of assets and experience that teams can apply to subsequent use cases. At this time, few organisations have the kinds of libraries and tools that make such things as cloud-based application development much easier and faster than it was a decade ago. Those that build their resources first will create advantages that can grow exponentially—as solutions deployed today deliver savings to fund and tools to facilitate the deployment and scaling of future innovations.
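
These three vectors lend themselves to a simple prioritisation score for comparing candidates side by side; the weighting choice and the example use cases and scores below are illustrative assumptions only.

```python
# Minimal sketch: rank candidate use cases on the three vectors above,
# scored 1-5. Complexity counts against a candidate; numbers are illustrative.
USE_CASES = {
    "internal policy Q&A bot":   {"value": 3, "complexity": 2, "reusability": 4},
    "client-facing advice tool": {"value": 5, "complexity": 5, "reusability": 3},
    "KYC document extraction":   {"value": 4, "complexity": 3, "reusability": 5},
}

def priority(scores: dict) -> int:
    # Favour value and reusable assets; penalise delivery and run complexity.
    return scores["value"] + scores["reusability"] - scores["complexity"]

for name, scores in sorted(USE_CASES.items(), key=lambda kv: -priority(kv[1])):
    print(f"{priority(scores):>2}  {name}")
```

On this illustrative scoring, the lower-risk internal and document-processing use cases outrank the client-facing tool, consistent with the guidance above to tackle lower-inherent-risk activities first.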

Conclusion

Anyone who’s trained for a marathon can tell you that coaches preach “going slow to go fast.” That’s not an invitation to relax, but a command to be careful, deliberate and disciplined. It’s a metaphor and mindset that resonates with us as we think about what will be required of FIs as they pursue transformation with GenAI.


Authors

Jim Christodouleas is a thought leader for banking solutions and capability. Based in Melbourne, he is a director with PwC Australia.

Eugénie Krijnsen is PwC’s global financial services advisory leader and the financial sector industry leader in the Netherlands. Based in Amsterdam, she is a partner with PwC Netherlands.

Maria Nazareth is a leading practitioner in generative AI for financial services. Based in Washington, DC, she is a principal with PwC US.
