Capitalizing on artificial intelligence means being proactive about risk and embracing responsible AI. This is one of five PwC priorities for making the most of AI in the coming year.
You may have seen ominous headlines about AI’s dark side, but business leaders seem unfazed: 85% of those surveyed (executives actively working with AI) said their companies are taking sufficient measures to protect against AI’s risks. This finding, however, suggests that many underestimate the true challenges and the level of effort needed to responsibly capitalize on AI. And when it comes to backing up those words with actions, such as implementing controls around decisions or data, there’s still a long way to go.
Only about one-third of respondents have fully tackled risks related to data, AI models, outputs, and reporting. Considering the growing public concern over issues such as bias in algorithms or facial recognition tools, and AI-powered “deepfakes,” that’s not good enough. And with AI increasingly present (and often invisible) in everyday business processes and in vendor-supplied solutions, rigorous AI risk management is increasingly critical.
While you can’t eliminate these risks, you can mitigate them through the five dimensions of responsible AI. What does that look like? It means integrating the processes, tools, and controls needed to address critical areas such as bias, explainability, cybersecurity, and ethics, among others. And responsibility applies to your workforce, too: As AI takes tedious tasks off your employees’ shoulders, you should invest in upskilling and cross-skilling your people, so they learn to welcome AI as an opportunity to perform higher-value work.
In our survey, the leading area that executives are working on is making AI interpretable and explainable: 50% are taking steps around explainability for those building and operating the system, while 49% are focused on explainability for those affected by the system. We also see companies beginning to realize that addressing larger issues around data and tech ethics requires collaboration with customers, industry peers, regulators, and tech companies.
Encouragingly, most survey respondents have company-wide AI governance, whether through a new and specialized AI center of excellence (18%), an existing data and analytics group (18%), an organization-wide AI leader (16%), outside providers (16%), or an existing automation group (15%). Yet, 16% are delegating AI strategy and governance to individual business units and functions.
Unless strict precautions are taken, that approach threatens to limit AI’s potential and make it harder to manage and secure. To take just one example: Without careful governance of AI procurement across the enterprise, an unscrupulous vendor could steal valuable intellectual property.
Take a multidisciplinary approach. Whichever governance structure your company chooses, its team must include leaders from management, procurement, and compliance, along with technology and data experts and process owners from different functions. Its mandate should also cover the entire enterprise.
Build up your AI risk confidence. Ensure — with the help of risk and compliance functions — that you have the right AI standards, controls, tests, and monitoring for all risk aspects of AI. You’ll also need a budget for AI assurance, just as you likely do for cybersecurity or cloud security.
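One such ongoing control can be sketched in code: a lightweight output monitor that flags a model when its recent behavior drifts from an approved baseline. The baseline rate and tolerance below are illustrative assumptions that a risk and compliance function would set, not fixed standards, and the function names are hypothetical.

```python
# Hedged sketch of one AI monitoring control: flag a model when its
# recent positive-prediction rate drifts too far from an approved
# baseline. Thresholds here are illustrative assumptions.

def positive_rate(predictions: list[int]) -> float:
    """Fraction of positive (1) predictions in a batch."""
    return sum(predictions) / len(predictions)

def drift_alert(recent: list[int], baseline_rate: float,
                tolerance: float = 0.10) -> bool:
    """True when the batch's positive rate leaves the approved band."""
    return abs(positive_rate(recent) - baseline_rate) > tolerance

# Example: approved baseline approval rate of 30%; a batch that
# approves 60% of cases trips the alert for human follow-up.
batch = [1, 1, 1, 0, 0, 1, 1, 0, 1, 0]  # 0.6 positive rate
print(drift_alert(batch, baseline_rate=0.30))  # True
```

In practice, a team would run checks like this continuously and budget for them the same way it budgets for cybersecurity monitoring.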
Act to maintain performance. Good governance and risk management don’t have to mean slow going. The right level of explainability, for example, will depend on each AI model’s level of risk, allowing quicker action in lower-risk cases. It’s also possible to automate many governance processes, such as capturing data in model sheets and automatically determining risk ratings for possible human review.
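The automated triage described above can be sketched as follows. The model-sheet fields, scoring rule, and risk tiers are illustrative assumptions, not a standard schema; the point is that routine rating can be automated so humans review only the models that warrant it.

```python
# Hedged sketch: derive a coarse risk rating from model-sheet metadata
# so that only higher-risk models are queued for human review.
# Field names and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelSheet:
    name: str
    uses_personal_data: bool   # consumes personally identifiable data
    affects_individuals: bool  # outputs directly affect people (credit, hiring)
    vendor_supplied: bool      # sourced from a third party, not built in-house

def risk_rating(sheet: ModelSheet) -> str:
    """Map model-sheet attributes to a coarse risk tier."""
    score = sum([sheet.uses_personal_data,
                 sheet.affects_individuals,
                 sheet.vendor_supplied])
    if score >= 2:
        return "high"    # route to human review
    if score == 1:
        return "medium"  # periodic monitoring
    return "low"         # standard controls only

def needs_human_review(sheet: ModelSheet) -> bool:
    return risk_rating(sheet) == "high"

# Example: a vendor-supplied model that scores job applicants
sheet = ModelSheet("resume-screener", uses_personal_data=True,
                   affects_individuals=True, vendor_supplied=True)
print(risk_rating(sheet), needs_human_review(sheet))  # high True
```

Tiering models this way is what lets governance move quickly: low-risk models pass through standard controls automatically, while scarce reviewer time goes to the high-risk tier.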