AI is being used across the financial services industry, including robotic and intelligent process automation (RPA and IPA). Recent advancements have surprised even the most optimistic, but don’t be distracted by these bright, shiny toys. Technology should solve real business problems, and you’ll face issues such as control and governance when you plug it into your real-world operating environment.
The shift from RPA to IPA. Today’s bots rely on humans to train them, but this will likely change. We expect to see emerging applications of IPA, including machine learning, automated process discovery, and natural language processing. While these advanced tools still need to be trained, they can learn from prior decisions and data patterns. Many of our clients tell us they are exploring IPA, have IPA bots in production, or are looking to scale.
What’s next in AI? The term AI is used to describe everything from automating simple tasks to handling complex thinking assignments. Firms will likely move toward more advanced “augmented intelligence”: tools that help humans make decisions and learn from those interactions. Firms can also use AI to customize product design and develop predictive analytics that improve outcomes, such as reducing accident rates.
How do you govern a machine? Financial institutions face some tricky questions. What controls should we apply to AI systems that decide and act in nanoseconds? How much authority should AI have? How do we make sure machines uphold their fiduciary duty? What about regulators? What if things don’t go as planned?
Computer, what’s my balance? Consumers are embracing automated assistants such as Google Home, Siri, and Alexa. You’ll need to integrate and manage these new channels, so think about when and where you’ll use them. Don’t forget to think through “off-ramps” that steer customers over to human backups when needed. Finally, give AI systems the opportunity to learn from the outcomes of human interactions.
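One way to picture the “off-ramp” idea: route a request to the bot only when the assistant is confident about the customer’s intent, hand everything else to a human, and log every outcome so the model can later learn from it. The sketch below is purely illustrative; the names (`Intent`, `route_request`, the confidence floor) are assumptions, not any real assistant SDK.

```python
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.75  # below this, steer the customer to a human "off-ramp"

@dataclass
class Intent:
    name: str          # e.g. "check_balance", as classified by the assistant
    confidence: float  # classifier's confidence in [0, 1]

# Outcomes recorded here could feed later retraining of the model.
interaction_log: list = []

def route_request(intent: Intent) -> str:
    """Answer high-confidence, supported intents; off-ramp everything else."""
    if intent.confidence >= CONFIDENCE_FLOOR and intent.name == "check_balance":
        response = "bot: your balance is (placeholder)"  # core banking lookup goes here
        channel = "bot"
    else:
        response = "human: transferring you to an agent"
        channel = "human"
    # Record the outcome so the AI can learn from human interactions too.
    interaction_log.append((intent.name, intent.confidence, channel))
    return response
```

The design choice worth noting is that the human hand-off is the default path: the bot must positively qualify to answer, rather than the human being a last resort.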
With excitement comes fear. Financial institution executives are eager to use digital labor, but many human workers already feel threatened by it. To deploy the technology successfully, focus on the people issues: share plans with workers so they understand which jobs will change and how, address their concerns, offer training to help them adapt, and be transparent throughout.
Trust, verify, and explain. For technologies to succeed, they should pass an IT audit. This may not be top of mind in a testing lab, but it will be critical as you move to production. We recommend creating a separate AI audit team—independent from the AI creators and implementers—to focus on controls. And consider transparency. You’ll want AI accountability, so you can explain why your algorithm reached a certain decision.
“Our clients are now thinking about ‘explainable AI,’ how an algorithm can explain the logic of its decision. How you would verify and validate your machine learning model is very different from how you’d typically validate a credit risk model.”
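One simple form of explainable AI is a model whose score decomposes additively by feature, so each input's contribution to a decision can be reported. The toy logistic “credit” model below is a minimal sketch: the feature names, weights, and approval threshold are invented for illustration, not a validated risk model.

```python
import math

# Hypothetical hand-set weights; in practice these come from training.
WEIGHTS = {"debt_ratio": -2.0, "years_employed": 0.5}
BIAS = 0.3

def explain_decision(applicant: dict) -> dict:
    """Return the decision plus per-feature contributions to the logit."""
    # A linear model's logit splits additively, feature by feature.
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    logit = BIAS + sum(contributions.values())
    probability = 1.0 / (1.0 + math.exp(-logit))
    return {
        "approve": probability >= 0.5,
        "probability": round(probability, 3),
        "contributions": contributions,  # why the score moved up or down
    }
```

For an applicant with `{"debt_ratio": 0.8, "years_employed": 4}`, the output shows that employment history pushed the score up (+2.0) while the debt ratio pushed it down (-1.6), which is exactly the kind of per-decision account an auditor or regulator might ask for.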
Our teams in asset and wealth management, banking and capital markets, and insurance are helping our clients tackle the biggest issues facing the financial services industry. With professionals across tax, assurance, and advisory practices, we can help you find ways to thrive even in a period of uncertainty. Whether you're preparing for regulatory changes, putting FinTech/InsurTech to work, or rethinking your human capital strategy, we work together with you to resolve complex issues, identify opportunities, and deliver value to your business.
Partner, FS Advisory and Digital Labor/RPA Leader, PwC US
Global Growth Strategy, US Financial Services Practice, PwC US
Leader, Financial Services Institute, PwC US