Quantifying the value of Responsible AI

  • Insight
  • 12 minute read
  • August 07, 2025

The race is on to realise AI’s potential to improve financial performance. PwC research shows the biggest gains accrue to companies that invest in adopting AI responsibly.

AI’s extraordinary potential to help companies streamline activities, enhance customer offerings, make workers more effective and speed innovation has executives hustling to deploy intelligent applications and agentic systems. In fact, leaders using AI are already gaining confidence in what it can deliver. A PwC survey published earlier this year found that CEOs of businesses that have adopted generative AI are much more likely than others to say the technology will improve the quality of their products and services.

Upbeat as they may be, many leaders also recognise that relying on AI and agents to perform tasks and make decisions can create risks for their business. And the extent of the risk is still largely unknown: AI is new enough that little data exists on how frequently adverse AI incidents occur or how much they cost companies. That information gap can make it hard for executives to decide if they should invest AI resources in governance and guard rails that enable the technology’s responsible development and use. In fact, executives in our 2024 US Responsible AI Survey cited the inability to quantify the impacts of such measures as the top reason for forgoing them.

Responsible AI (RAI) can add value in a number of ways beyond protecting companies and customers from harmful errors, bias and other risks that can cause financial, physical and reputational damage. It can, for example, unlock AI value faster by accelerating AI application development: identifying the risks that matter helps streamline processes and requirements. And it can increase employee adoption and consumer trust by enabling more effective testing and quality controls, which, in turn, enable AI to provide more reliable results. But is there a way to quantify these benefits?

To help determine whether investing in a responsible approach to AI adds measurable value, we built a system dynamics model to compare the financial performance of companies that have AI safeguards in place with companies that don’t. (See ‘About the research,’ below, to learn about our methodology.) By simulating tens of thousands of scenarios, we found that organisations with a robust RAI programme reduced the frequency of adverse AI-related incidents by as much as half. When simulated incidents did occur, the companies engaging in Responsible AI recovered more of their financial value more rapidly. And overall, they achieved valuations and revenues that were as much as 4% higher than companies investing in compliance only. These modelled results may be directional rather than exact, but they are clear. When companies invest in RAI practices, even if it means putting slightly less of their AI budget into technology, talent and tools, they come out ahead.

A robust Responsible AI programme is much more than a collection of policies posted on an internal website or tick boxes for industry compliance. It includes a set of ongoing practices that enable organisations to tap AI’s transformative value at speed while addressing risks in a consistent, transparent and accountable manner.

Each AI application presents risks that can stem from any of six areas: data, underlying models, infrastructure, non-compliance with applicable laws, process integration issues, and intentional or accidental misuse of the AI solution. Addressing these risks requires identifying and tiering them so you can activate the right people, assessment processes, governance, training, controls, testing and monitoring at the level appropriate for each use case. A heavy, blanket approach can unnecessarily slow development of low-risk use cases, while a universally light touch can leave a firm open to significant harm. Tailoring policies and procedures to each use case, as needed, helps strike the right balance between accelerating innovation and moving more cautiously to mitigate significant risks.
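To make tiering concrete, here is a minimal sketch of how a use case might be scored against those six risk areas and routed to a proportionate set of safeguards. The tier names, scoring scale and control lists are illustrative assumptions, not a prescribed framework.

```python
# Illustrative risk-tiering lookup (hypothetical tiers and controls).
# Each AI use case is scored against the six risk areas named above and
# routed to a proportionate set of safeguards.

RISK_AREAS = ["data", "model", "infrastructure", "legal_compliance",
              "process_integration", "misuse"]

# Hypothetical mapping of risk tier to the minimum controls it activates.
TIER_CONTROLS = {
    "low":    ["self-assessment", "standard testing"],
    "medium": ["risk review", "bias and robustness testing", "human-in-the-loop checks"],
    "high":   ["governance committee sign-off", "independent validation",
               "continuous monitoring", "incident response plan"],
}

def tier_use_case(scores: dict[str, int]) -> str:
    """Assign a tier from per-area scores (1 = low concern, 3 = high concern)."""
    worst = max(scores.get(area, 1) for area in RISK_AREAS)
    return {1: "low", 2: "medium", 3: "high"}[worst]

# Example: an internal document-summarisation assistant handling sensitive data.
use_case_scores = {"data": 2, "model": 1, "infrastructure": 1,
                   "legal_compliance": 2, "process_integration": 1, "misuse": 1}
tier = tier_use_case(use_case_scores)
print(tier, "->", TIER_CONTROLS[tier])
```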

At a high level, the components of a Responsible AI risk management programme fall into three categories:

  • Foundational capabilities. Responsible AI principles, policies and procedures across risk domains (e.g., cyber, privacy, model and legal risk), an inventory of the organisation’s AI use cases, an AI risk taxonomy, and a risk intake and tiering process.
  • Operating model and governance. Clear roles and responsibilities, a governance committee and procedures for escalation, an AI risk and control matrix, and company-wide training and communications.
  • Application life cycle. AI development and deployment standards, testing and monitoring protocols, and risk mitigation and tracking mechanisms.

Responsible AI generates a financial premium

Our latest CEO and AI Agent surveys show businesses steadily increasing AI adoption and seeing benefits from its usage. Trust in AI, however, is weak. Only a third of CEOs around the world say they have a high degree of trust in embedding AI into key processes. And according to our most recent Voice of the Consumer research, only about half of consumers trust the technology even for low-stakes activities like providing product recommendations. Even fewer feel confident using AI for higher-stakes purposes like getting investment advice. In principle, though, companies that manage AI risks—everything from inaccurate chatbot outputs to autonomous driving fatalities—help build the trust required to boost employee adoption, protect organisations and allay consumer fears.

Our modelling results reflect this trust-building dynamic. We began the simulation by defining two sets of hypothetical companies: those spending just enough to meet compliance requirements for their specific industry and those spending an additional 10% of their AI budget on more complete Responsible AI programmes (a reasonable approximation indicated by several sources and corroborated by our own experience). Then we simulated the companies’ five-year performance under a variety of scenarios defined by 22 variables (for more about this approach, see the ‘About the research’ section, below). The outcome: companies investing in sound RAI programmes achieve levels of trust from both the public and their employees that are up to 7% higher than their peers.

What’s more, our simulation implies that the responsible use of AI creates a ‘trust halo,’ enhancing a company’s value and revenues even in scenarios in which no AI incidents occur. The simulation shows that companies investing in a robust RAI programme see valuations up to 4% higher and revenues up to 3.5% higher than companies with compliance-only investment. Companies in more highly regulated sectors and geographies saw smaller gains, perhaps because their compliance requirements call for a higher level of baseline investment in AI safeguards. Still, these results align with other studies demonstrating a strong correlation between consumer trust in an organisation and performance. In our own research, for example, we found that trust accounted for an unexpectedly high 31% of the variance in performance among companies.

Responsible AI adds resilience

The benefits of Responsible AI programmes go further still in our modelled results. These initiatives protect companies against serious AI incidents, and they promote more rapid and successful recovery when incidents do take place.

The protection afforded by Responsible AI is significant: as much as a 50% reduction in the chance of an adverse AI incident, which we estimated at a baseline of 2% annually based on information from the OECD and the Responsible AI Collaborative (RAIC) AI Incident Database (AIID) project. (According to the AIID Editor’s Guide definitions, an adverse AI incident is an event in which an AI system is implicated that causes harm to a person, property or the environment. Examples include AI-driven bias, significant data leaks, fatal crashes involving AI-powered autonomous vehicles or market flash crashes.) Even in a highly regulated sector with extensive compliance mandates, like US financial services, companies that create stronger RAI programmes than required slash their risk of an incident by a third.

Incremental increases in spending on RAI also produce improvements. For example, even a 3 percentage point increase in RAI spending decreases the likelihood of an incident by about 18%. 

Though these improvements in protection may appear marginal, their value is magnified by the rapid, profound impact of AI incidents, which are increasing along with AI usage. The 233 significant incidents reported to AIID in 2024 might seem low, but the tally reflects a 56% increase from the prior year. Moreover, it captures only those incidents that people submit to the database; factoring in unreported incidents worldwide last year would likely raise the total by orders of magnitude. Whether or not a company has a comprehensive RAI programme, our simulation suggests that public trust in the company drops precipitously, by at least 20%, immediately after an incident. That trust recovers very little within the modelled period for companies with a full RAI programme, and even less for those without one.

The blow to company value can be even more substantial, with the potential to reach as high as 50% within the first two weeks after the most severe events. Consider this liability against the value declines we’ve seen after mishaps in cybersecurity, an area that typically commands a much larger share of company spending (10% of IT budgets on average). Some cybersecurity incidents in the past year knocked company stock value down between 15% and 18% immediately post-incident. Cyber resilience is fittingly seen as a significant competitive advantage because of its ability to improve consumer trust. The simulated losses in our model suggest Responsible AI should be regarded in the same way.

Shortly after an AI incident, the fortunes of the RAI investors and those focused only on compliance diverge. In our simulation, companies with a comprehensive Responsible AI programme recover faster and more strongly: 90% of their pre-incident value returns in seven weeks, and 95% comes back in 13 months. Those without substantial RAI take more than three times longer (25 weeks) to recover 90% of pre-incident value—and they never reach 95% within the modelled period.

The simulation suggests higher trust among employees could account for this difference. Though organisations with strong RAI programmes see only slightly improved levels of public trust post-incident, the model shows that employees’ trust in an RAI-adopting company recovers twice as fast as it does in companies with a compliance-only policy. And their workers’ use of AI reaches pre-incident levels about 30% faster. The simulation also suggests that RAI companies find it easier to retain and attract quality AI talent sooner after a mishap than do compliance-only companies. In fact, at companies with solid RAI programmes, employee trust and personnel quality eventually exceed pre-incident levels by about 5%.

Deciding where to invest in Responsible AI

Asking the following questions can help you determine where to invest your time and resources to build a Responsible AI programme that creates value while safeguarding your organisation and customers.

Is Responsible AI embedded in your AI strategy? 

Responsible AI practices are integral to developing and executing an AI strategy that can achieve your organisation’s goals, whether they be revenue generation, cost reduction or any of the myriad other possibilities. If Responsible AI isn’t shaping which initiatives you pursue—and how—you may be overinvesting in risky efforts outside your organisation’s comfort level or underinvesting in desired high-value, low-risk ones. One major airline, for example, brought the risk management team together with senior business and tech leaders to shape its AI road map. Together, they chose to focus their generative AI capabilities only on internal productivity tools, explicitly excluding any use cases that could affect passenger or employee safety. This early filtering helped them direct investment towards areas of value while accelerating development through the design of fit-for-purpose guard rails and governance mechanisms that aligned to their low-risk posture. When governance is informed by AI strategy, all aspects of risk management, including legal and compliance, can coordinate to achieve business objectives with appropriate levels of control. If your organisation doesn’t explicitly consider and integrate responsible practices throughout its AI strategy and execution, that’s your first investment gap.

Do your teams have repeatable processes for building and launching AI applications and products responsibly? 

Repeatable Responsible AI practices should be part of every step in AI development and deployment—from assessing potential use cases for their value and risk to closely monitoring the performance of live applications. If every use case requires starting from scratch—including figuring out how to assess risk, implement fit-for-purpose controls, run tests, handle data, etc.—you’re slowing progress. You’re also sapping value by increasing the chance of costly mistakes and low-quality work that needs to be redone. It’s a sign that you should consider investing in the development of assets such as risk-tiering frameworks, standardised application development guidance and documentation templates. Just as important: AI governance shouldn’t be a separate process layered on top of product development. One financial services organisation we know found that confusion and delays arose from AI being governed both by its standard product life cycle and a separate set of AI oversight procedures. By realigning all AI governance requirements to the product life cycle and providing teams with clear examples and templates, developers found it easier to engage with the right governance processes and teams at the right time. This adjustment accelerated development and ensured governance processes were followed and replicable. 

Is there clear executive ownership of Responsible AI?

If AI oversight lives in a silo—whether it’s within tech, legal or compliance—you’ll struggle to embed governance across the organisation. RAI programmes need a senior leader who can bring together a cross-functional executive team that includes people from key areas such as risk management, IT, security and, importantly, relevant business functions; in the end, the business holds the risk as well as the responsibility for delivering results from AI initiatives. We’ve seen effective RAI programmes headed by tech leaders like the CIO or other functional leaders such as the COO, CISO or CRO. If your RAI efforts have no clear leader, it’s time to assign one.

Are you using technology to embed Responsible AI into everyday workflows? 

People sit at the heart of Responsible AI, but technology can help make it practical and scalable throughout the organisation. If your RAI processes are manual, slow or inconsistently applied, consider investing in technology that can augment functions like running risk assessments; identifying legal, reputational and other risks; and assessing regulatory compliance and effective AI governance. We have, for example, seen engineering teams use generative AI to create the first draft of AI model documentation, a critical element of Responsible AI, because documentation provides model transparency (and replicability). It also captures information about how the model was developed, the data it uses, how it works, how it should be used, its limitations and more. Once humans finalise and verify the documentation, teams can use generative AI to draft derivative documents tailored to various stakeholders—for example, for risk managers who need to perform model risk assessments or employees who need to understand when and how to use the model.
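As a rough illustration of that documentation workflow, the sketch below assembles a first-draft prompt from structured model metadata. The model, the metadata fields and the generate_draft helper are hypothetical placeholders for whichever generative AI service and review process a team actually uses; humans still review and finalise anything the model produces.

```python
# Illustrative sketch: drafting model documentation from structured metadata.
# All names below are hypothetical; `generate_draft` stands in for a team's
# approved generative AI service.

from textwrap import dedent

model_metadata = {
    "name": "claims-triage-classifier",  # hypothetical model
    "purpose": "route incoming insurance claims to the right review queue",
    "training_data": "three years of anonymised, labelled claims records",
    "limitations": "not validated for claims filed outside the covered regions",
    "intended_users": "claims operations staff",
}

def build_documentation_prompt(meta: dict) -> str:
    """Assemble a draft prompt covering the elements named in the article:
    how the model was developed, the data it uses, how it should be used and its limits."""
    return dedent(f"""
        Draft model documentation for '{meta['name']}'.
        Purpose: {meta['purpose']}
        Training data: {meta['training_data']}
        Known limitations: {meta['limitations']}
        Intended users: {meta['intended_users']}
        Include sections on development approach, appropriate use and monitoring.
    """).strip()

def generate_draft(prompt: str) -> str:
    """Placeholder for a call to the organisation's approved generative AI service."""
    raise NotImplementedError("Wire this to your own model or API.")

draft_prompt = build_documentation_prompt(model_metadata)
# draft = generate_draft(draft_prompt)  # output is reviewed and verified by humans
print(draft_prompt)
```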

Do you have a plan for transparency?

If you’re not actively communicating how AI is being governed—to employees, customers, regulators and investors—you’re missing out on the opportunity to build trust, and you risk losing it if even minor issues arise. Invest in dashboards, reporting mechanisms or quarterly briefings to convey your organisation’s governance posture and progress on any gaps.

About the research

We simulated the impacts of RAI on a relative basis. In other words, we examined how a company investing sufficiently in RAI performs compared to one that invests only the bare minimum necessary to meet its industry’s compliance requirements.

Our system dynamics model considered 22 variables, including AI adoption levels, AI and RAI budget sizes, AI market size, the regulatory environment and RAI effectiveness. Though it’s impossible for any model to weigh every factor that might influence RAI and its impacts, we believe ours captures enough nuance to meaningfully advance understanding of the measurable impact RAI can make.
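For readers who want a feel for the shape of the comparison, the deliberately simplified Monte Carlo sketch below pits two hypothetical cohorts against each other: one compliance-only, one spending extra on RAI. Its handful of parameters are illustrative assumptions that stand in for, rather than reproduce, the 22 calibrated variables of the system dynamics model; only the 2% baseline incident rate and the up-to-50% risk reduction come from the figures cited in this article.

```python
# Simplified Monte Carlo illustration of the two-cohort comparison.
# Parameters marked "assumed" are illustrative, not calibrated model inputs.

import random

YEARS = 5
RUNS = 10_000
BASE_INCIDENT_RATE = 0.02         # baseline annual incident likelihood (from the article)
RAI_RISK_REDUCTION = 0.5          # up to ~50% reduction with a robust RAI programme
INCIDENT_VALUE_HIT = 0.20         # assumed average value hit per incident
RAI_RECOVERY_SHARE = 0.75         # assumed share of the hit recovered with RAI
COMPLIANCE_RECOVERY_SHARE = 0.40  # assumed share recovered without RAI

def simulate(incident_rate: float, recovery_share: float) -> float:
    """Return end-of-horizon value (starting at 1.0) for one simulated company."""
    value = 1.0
    for _ in range(YEARS):
        if random.random() < incident_rate:
            value *= 1 - INCIDENT_VALUE_HIT * (1 - recovery_share)
    return value

random.seed(7)
rai = [simulate(BASE_INCIDENT_RATE * (1 - RAI_RISK_REDUCTION), RAI_RECOVERY_SHARE)
       for _ in range(RUNS)]
compliance = [simulate(BASE_INCIDENT_RATE, COMPLIANCE_RECOVERY_SHARE)
              for _ in range(RUNS)]

print(f"Mean end value, RAI cohort:             {sum(rai) / RUNS:.4f}")
print(f"Mean end value, compliance-only cohort: {sum(compliance) / RUNS:.4f}")
```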

Data for some variables, such as AI adoption rates, was available when we began work on the model. However, other factors required fact-based assumptions from our experts. As an example, we estimated the likelihood of a company experiencing an adverse AI incident. Based on data from the OECD, Stanford AI Index and other sources, about 78% of midsized to large enterprises worldwide use AI, which equates to about 1 million organisations. Given that 233 incidents were reported to the AI Incident Database in 2024, the percentage of firms with a reported adverse AI incident is about 0.02%. If we assume that only one out of every ten publicised incidents is reported to the database, and that unpublicised incidents occur at ten times the rate of publicised ones, the result suggests a 2% annual rate.
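Spelled out as arithmetic, the estimate looks like this; the two multipliers are the stated assumptions, not measured values.

```python
# Back-of-envelope estimate of the baseline incident rate described above.

ai_adopting_orgs = 1_000_000       # ~78% of midsized-to-large enterprises worldwide
reported_incidents_2024 = 233      # AI Incident Database, 2024

reported_rate = reported_incidents_2024 / ai_adopting_orgs  # ~0.02%

reporting_multiplier = 10  # assumption: only 1 in 10 publicised incidents reaches the database
publicity_multiplier = 10  # assumption: unpublicised incidents occur at 10x the publicised rate

estimated_annual_rate = reported_rate * reporting_multiplier * publicity_multiplier
print(f"Estimated annual incident likelihood: {estimated_annual_rate:.1%}")  # roughly 2%
```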

The authors would like to thank Robert N. Bernard for his contributions to this article and his work on the model that informs it.

About the authors

Ilana Golbin-Blumenfeld

AI Assurance, Principal, PwC United States

Ilana Golbin-Blumenfeld is a leading practitioner in Responsible AI. She is a principal with PwC US.
David De Lallo

Administrative, PwC United States

David De Lallo is a contributing editor for PwC.
