Companies that invest in strong governance over the technology show higher valuations and revenue, in addition to safer operations.

How Responsible AI can create measurable value

  • 3-minute read
  • November 18, 2025

Responsible AI (RAI) can add value in a number of ways beyond protecting companies and customers from harmful errors, bias, and other risks that can cause financial, physical, and reputational damage. RAI can, for example, accelerate AI application development—identifying the risks that matter helps streamline processes and requirements. And it can increase employee adoption and consumer trust by enabling more effective testing and quality controls. But is there a way to quantify these benefits?

To help determine whether investing in a responsible approach to AI adds measurable value, we modelled two groups of hypothetical companies: those spending just enough to meet AI compliance requirements for their specific industry and those spending an additional 10% of their AI budget on more complete Responsible AI programmes. Then we simulated the companies’ five-year performance under a variety of scenarios.

The second group saw a range of positive outcomes:

  • The frequency of adverse AI-related incidents—such as AI-driven bias or significant data leaks—decreased by up to half. Even in highly regulated sectors with extensive compliance mandates such as US financial services, companies that invested in Responsible AI reduced their risk of an incident by a third. Moreover, those benefits compounded over time.
  • When simulated incidents did occur, the companies engaging in Responsible AI recovered their financial value more rapidly, returning to 90% of their pre-incident value within seven weeks and to 95% within 13 months.
  • The valuations of companies prioritising RAI were as much as 4% higher than those of companies investing in compliance only, and revenues were up to 3.5% higher. Why? The simulation implies that the responsible use of AI creates a “trust halo,” increasing a company’s value and revenues even in scenarios in which no AI incidents occur.
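To make the simulation setup concrete, here is a minimal Monte Carlo sketch in Python. It compares expected five-year incident counts for a compliance-only company against one investing in Responsible AI. The 30% annual incident probability and the 50% risk reduction are illustrative assumptions chosen to mirror the headline finding, not parameters from PwC's actual model.

```python
import random

def simulate_incidents(years=5, annual_incident_prob=0.30,
                       rai_reduction=0.5, trials=10_000, seed=42):
    """Monte Carlo sketch: average five-year incident counts for a
    compliance-only firm vs. one investing in Responsible AI.
    All parameters are illustrative assumptions."""
    rng = random.Random(seed)
    baseline_total = rai_total = 0
    for _ in range(trials):
        for _ in range(years):
            # Compliance-only firm faces the full incident probability.
            if rng.random() < annual_incident_prob:
                baseline_total += 1
            # RAI investment cuts the per-year incident probability.
            if rng.random() < annual_incident_prob * (1 - rai_reduction):
                rai_total += 1
    return baseline_total / trials, rai_total / trials

base_rate, rai_rate = simulate_incidents()
print(f"Expected 5-year incidents: compliance-only {base_rate:.2f}, "
      f"with RAI {rai_rate:.2f}")
```

Under these assumptions, halving the per-year incident probability roughly halves the expected incident count over the five-year horizon, which is the compounding effect the findings describe.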

Here’s how company leaders can put the right controls around AI:

Embed Responsible AI into the overall AI strategy. If Responsible AI isn’t shaping which initiatives you pursue—and how—you may be overinvesting in risky efforts outside your organisation’s comfort level or underinvesting in desired high-value, low-risk opportunities. For example, an airline brought its risk management team together with senior business and tech leaders and determined that it would only use GenAI on internal productivity tools, explicitly excluding any use cases that could affect passenger or employee safety. This strategy helped direct investments towards areas of value that aligned with the airline’s risk profile.

Develop repeatable processes for building and launching AI applications and products responsibly. Consistent Responsible AI practices should be part of every step in AI development and deployment—from assessing potential use cases for their value and risk to closely monitoring the performance of live applications. If every use case requires starting from scratch—including figuring out how to assess risk, implement fit-for-purpose controls, run tests, and handle data—you’re slowing progress. You’re also sapping value by increasing the chance of costly mistakes and low-quality work that needs to be redone.

Set clear executive ownership. RAI programmes need a senior leader who can bring together a cross-functional executive team, including people from key areas such as risk management, IT, security, and, importantly, relevant business functions. If your RAI efforts have no clear leader, it’s time to assign one.

Use technology to embed Responsible AI into everyday workflows. Solutions are available to help make Responsible AI practical and scalable throughout the organisation. Consider investing in technology that can augment functions such as running risk assessments; identifying legal, reputational, and other risks; and assessing regulatory compliance and effective AI governance.

Be transparent. Actively communicate how AI is being governed—to employees, customers, regulators, and investors. Implement dashboards, reporting mechanisms, or quarterly briefings to convey your organisation’s governance posture and progress in closing any gaps.

Explore the full findings of PwC’s “Quantifying the value of Responsible AI” report

Ilana Golbin-Blumenfeld

AI Assurance, Principal, PwC US
