Responsible AI (RAI) can add value in a number of ways beyond protecting companies and customers from harmful errors, bias, and other risks that can cause financial, physical, and reputational damage. RAI can, for example, accelerate AI application development—identifying the risks that matter helps streamline processes and requirements. And it can increase employee adoption and consumer trust by enabling more effective testing and quality controls. But is there a way to quantify these benefits?
To help determine whether investing in a responsible approach to AI adds measurable value, we modelled two groups of hypothetical companies: those spending just enough to meet AI compliance requirements for their specific industry and those spending an additional 10% of their AI budget on more complete Responsible AI programmes. Then we simulated the companies’ five-year performance under a variety of scenarios.
The second group saw a range of positive outcomes across these simulations.
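The mechanics of the modelling aren't detailed here, but a minimal Monte Carlo sketch conveys the shape of such a simulation. Every parameter below (incident probability, adoption uplift, incident cost, the arbitrary value units) is an illustrative assumption, not a figure from the model described above:

```python
import random

# Illustrative Monte Carlo sketch: all parameters are assumptions,
# not figures from the modelling described in the article.
YEARS = 5
TRIALS = 10_000

def simulate_company(rai_spend_share: float) -> float:
    """Return cumulative five-year value for one simulated company.

    rai_spend_share: extra share of the AI budget spent on Responsible AI
    (0.0 = compliance-only baseline, 0.10 = fuller RAI programme).
    """
    value = 0.0
    # Assumed effects: RAI spend lowers incident risk and raises adoption.
    incident_prob = 0.20 - 0.8 * rai_spend_share   # yearly chance of a costly AI failure
    adoption_uplift = 1.0 + 1.5 * rai_spend_share  # multiplier on annual AI value
    for _ in range(YEARS):
        annual_value = 100.0 * adoption_uplift      # baseline value in arbitrary units
        if random.random() < incident_prob:
            annual_value -= 150.0                   # cost of an incident (fines, rework, churn)
        value += annual_value - 100.0 * rai_spend_share  # net of the extra RAI spend
    return value

baseline = [simulate_company(0.0) for _ in range(TRIALS)]
rai_group = [simulate_company(0.10) for _ in range(TRIALS)]
print(f"compliance-only mean: {sum(baseline) / TRIALS:.1f}")
print(f"RAI programme mean:   {sum(rai_group) / TRIALS:.1f}")
```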
Here’s how company leaders can put the right controls around AI:
Embed Responsible AI into the overall AI strategy. If Responsible AI isn't shaping which initiatives you pursue, and how you pursue them, you may be overinvesting in risky efforts outside your organisation's comfort level or underinvesting in high-value, low-risk opportunities. For example, an airline brought its risk management team together with senior business and tech leaders and determined that it would use GenAI only on internal productivity tools, explicitly excluding any use cases that could affect passenger or employee safety. This strategy helped direct investment towards areas of value aligned with the airline's risk profile.
Develop repeatable processes for building and launching AI applications and products responsibly. Consistent Responsible AI practices should be part of every step in AI development and deployment—from assessing potential use cases for their value and risk to closely monitoring the performance of live applications. If every use case requires starting from scratch—including figuring out how to assess risk, implement fit-for-purpose controls, run tests, and handle data—you’re slowing progress. You’re also sapping value by increasing the chance of costly mistakes and low-quality work that needs to be redone.
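As a sketch of what "repeatable" can look like in practice, the hypothetical helper below tiers a use case by risk and maps each tier to a standard set of controls. The criteria, tiers, and control names are invented placeholders for whatever your own risk framework defines:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

@dataclass
class UseCaseAssessment:
    name: str
    uses_personal_data: bool
    customer_facing: bool
    affects_safety: bool

def risk_tier(uc: UseCaseAssessment) -> RiskTier:
    """Hypothetical tiering rule; real criteria come from your own framework."""
    if uc.affects_safety:
        return RiskTier.HIGH
    if uc.customer_facing or uc.uses_personal_data:
        return RiskTier.MEDIUM
    return RiskTier.LOW

# Each tier carries a fixed, pre-agreed set of controls, so no use case
# starts from scratch.
REQUIRED_CONTROLS = {
    RiskTier.LOW: ["standard testing"],
    RiskTier.MEDIUM: ["standard testing", "bias review", "human-in-the-loop sign-off"],
    RiskTier.HIGH: ["standard testing", "bias review", "human-in-the-loop sign-off",
                    "executive approval", "post-launch monitoring plan"],
}

uc = UseCaseAssessment("invoice summariser", uses_personal_data=False,
                       customer_facing=False, affects_safety=False)
tier = risk_tier(uc)
print(uc.name, "->", tier.value, "| controls:", REQUIRED_CONTROLS[tier])
```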
Set clear executive ownership. RAI programmes need a senior leader who can bring together a cross-functional executive team, including people from key areas such as risk management, IT, security, and, importantly, relevant business functions. If your RAI efforts have no clear leader, it’s time to assign one.
Use technology to embed Responsible AI into everyday workflows. Solutions are available to help make Responsible AI practical and scalable throughout the organisation. Consider investing in technology that can augment functions such as running risk assessments; identifying legal, reputational, and other risks; and assessing regulatory compliance and the effectiveness of AI governance.
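One common way such tooling embeds itself in the workflow is as a pre-deployment gate. The sketch below is a minimal, hypothetical version: the gate names and the application record are invented, standing in for whichever governance platform you adopt:

```python
# Illustrative pre-deployment gate: the checks and fields are hypothetical
# placeholders for real governance tooling.
from typing import Callable

def check_risk_assessment(app: dict) -> bool:
    return app.get("risk_assessment_completed", False)

def check_bias_testing(app: dict) -> bool:
    return app.get("bias_test_passed", False)

def check_compliance_review(app: dict) -> bool:
    return app.get("compliance_signoff", False)

GATES: list[tuple[str, Callable[[dict], bool]]] = [
    ("risk assessment", check_risk_assessment),
    ("bias testing", check_bias_testing),
    ("compliance review", check_compliance_review),
]

def release_allowed(app: dict) -> bool:
    """Block deployment until every Responsible AI gate passes."""
    failures = [name for name, gate in GATES if not gate(app)]
    for name in failures:
        print(f"BLOCKED: {name} incomplete for {app['name']}")
    return not failures

app = {"name": "support-chatbot", "risk_assessment_completed": True,
       "bias_test_passed": True, "compliance_signoff": False}
print("deploy?", release_allowed(app))
```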
Be transparent. Actively communicate how AI is being governed—to employees, customers, regulators, and investors. Implement dashboards, reporting mechanisms, or quarterly briefings to convey your organisation’s governance posture and progress in closing any gaps.
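A dashboard or quarterly briefing ultimately rests on a simple aggregation of control statuses. The snippet below is a minimal sketch of that summary step; the control list and statuses are invented examples:

```python
# Minimal governance-posture summary; controls and statuses are illustrative.
from collections import Counter

controls = [
    {"control": "model inventory maintained", "status": "met"},
    {"control": "use-case risk assessments", "status": "met"},
    {"control": "bias testing on launches", "status": "partial"},
    {"control": "incident response playbook", "status": "gap"},
]

summary = Counter(c["status"] for c in controls)
coverage = summary["met"] / len(controls)
print(f"Controls met: {summary['met']}/{len(controls)} ({coverage:.0%})")
for c in controls:
    if c["status"] != "met":
        print(f"open gap: {c['control']} ({c['status']})")
```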