Good governance for AI: 5 real-world insights for risk professionals

Jennifer Kosar

Trust and Transparency Solutions Leader, PwC US

Rohan Sen

Principal, Data Risk and Responsible AI, PwC US

Ilana Golbin

Director and Responsible AI Lead, PwC US

If you’re a chief risk officer (CRO), chief compliance officer (CCO), chief information security officer (CISO), chief legal officer (CLO) or other professional in a risk-focused area, then governing AI is at least on your radar – and probably on your priority list. The reason is the speed with which generative AI (GenAI) is advancing. It’s driving productivity gains today. It’s laying the groundwork for new business models tomorrow. Yet for GenAI (or any AI) to deliver value, it should be well governed.  

What does good AI governance look like? How do you achieve it, as part of a Responsible AI approach, and keep it improving as AI keeps evolving? And how can AI governance not only manage risks, but also help AI deliver value more quickly?

We’re risk professionals and AI practitioners ourselves. These are the kinds of questions we answer every day, both for our own firm’s GenAI implementation and as we help clients on their AI journeys. Based on this experience, we offer five insights that can help you achieve good AI governance.

Insight #1: Good AI governance delivers value even before the use case starts

Many proposed use cases for GenAI, we’ve found, have something in common: They're not, in fact, use cases for GenAI. A different AI tool or other AI technology may be better suited to get the job done. Mapping the right tools and data to the right use case is something that good AI governance can do. 

Effective "governance at intake” requires methods and tools to assess use cases for feasibility, complexity, suitability and risk. These methods and tools should be aligned across business functions and be applied by cross-functional teams that have technology, business and risk experience.

Insight #2: GenAI makes new demands of governance

Unlike traditional AI, where a model is typically built for one specific purpose, GenAI-based solutions are more likely to support multiple use cases, with different risk profiles, in different functions. And it’s not just a small group of tech specialists who use GenAI. Additionally, GenAI is increasingly embedded in third-party services and everyday enterprise applications.

These differences often mean governance must expand in speed, scale and reach. This enhanced governance should cover procurement, third-party risk management, security, privacy, data, compliance and more. It also benefits from a common, enterprise-wide view of risk, which a risk taxonomy can provide.

Insight #3: An AI-focused risk taxonomy is fundamental

A comprehensive, standardized, AI-focused risk taxonomy can help make governance decisions consistent and repeatable. It can help your people prioritize risks, escalate incidents, remediate issues, communicate with stakeholders and more. The AI risk taxonomy we use covers six areas (see the sketch after this list):

  • AI models: training, development and performance
  • Data: collection, processing, storage, management and use
  • System and infrastructure: implementation and operation of AI within the broader software and tech environment, including cybersecurity risks
  • Users: unintentional misuse, malicious actions and cyberattacks
  • Legal and compliance: laws, rules and regulations, including privacy
  • Process impact: how integrating AI may impact existing workflows
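To show how a shared taxonomy can make triage consistent, here is a minimal, hypothetical Python sketch. The six areas come from the list above; everything else (the severity scale, the routing table and the owning teams) is an illustrative assumption.

```python
from dataclasses import dataclass
from enum import Enum

# The six taxonomy areas listed above.
class RiskArea(Enum):
    AI_MODELS = "AI models"
    DATA = "Data"
    SYSTEM = "System and infrastructure"
    USERS = "Users"
    LEGAL = "Legal and compliance"
    PROCESS = "Process impact"

# Hypothetical routing table: which team triages each area.
TRIAGE_OWNER = {
    RiskArea.AI_MODELS: "model risk management",
    RiskArea.DATA: "data governance",
    RiskArea.SYSTEM: "security and infrastructure",
    RiskArea.USERS: "security operations",
    RiskArea.LEGAL: "legal and compliance",
    RiskArea.PROCESS: "business process owners",
}

@dataclass
class RiskIncident:
    description: str
    area: RiskArea
    severity: int  # 1 (minor) .. 5 (critical); illustrative scale

    def route(self) -> str:
        action = "escalate now" if self.severity >= 4 else "log and triage"
        return f"{action} -> {TRIAGE_OWNER[self.area]}"

incident = RiskIncident("prompt injection observed in customer chatbot",
                        RiskArea.USERS, severity=4)
print(incident.route())  # escalate now -> security operations
```

Tagging every incident against the same six areas is what lets reports roll up into the common, enterprise-wide view of risk described above.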

Insight #4: Good AI governance advances AI strategy

There’s a sweet spot for AI governance — where it neither holds back the business nor leaves you too vulnerable to risks. In this sweet spot, governance helps prevent delays in AI initiatives, since it addresses problems before they start. With good governance, you won’t have to halt and reverse-engineer projects later — including when new AI regulations emerge. You’ll already be prepared. And by identifying areas where AI risks are most manageable, governance can help guide strategy.

To achieve AI governance that advances AI strategy, give governance a seat at the table from the very start. Together, AI specialists, business leads and risk professionals can align business goals and risk management needs. They can also work together to build trust into AI initiatives from Day One. A trusted foundation of good AI governance will help you keep innovating in line with your chosen risk profile, even as technology evolves and new opportunities emerge.  

Insight #5: Be tech-powered – but human-led

There are powerful technology tools that can help with AI governance. But these tools, like AI itself, need well-trained, engaged people to manage them. Your entire AI governance team — which will include risk, AI and business specialists — may need coaching to understand AI and AI governance tools, and to collaborate effectively. Clear roles and responsibilities can help speed up prioritization, approvals, and remediation where necessary. As AI spreads, the broader workforce may need change management and upskilling. 

Also consider updating codes of conduct and acceptable use policies, and creating channels that help people report new risks. And always remember that people should be in the lead when making the big decisions on governing and building AI, so that it can both deliver business value and grow stakeholder trust.

Ana Mohapatra contributed to this article.
