The signals are clear. Responsible AI is an enabler of innovation and differentiated customer experiences. Nearly 60% of executives say Responsible AI boosts ROI and efficiency, and 55% report improvements in customer experience and innovation.
The focus is now shifting to operationalization, turning Responsible AI principles into scalable, repeatable processes, with half of our respondents citing this as their biggest hurdle.
As AI capabilities continue to evolve from generative to agentic systems and AI footprints increase within organizations, that challenge is becoming more urgent. The most advanced organizations are meeting it head-on through automation, tech-enablement, and feedback loops that keep governance aligned with rapid technological change.
Our findings highlight six insights showing how Responsible AI is evolving from foundational governance to innovation and scale.
Respondents were clear: the primary benefit cited for Responsible AI practices is value creation. While Responsible AI is often positioned as a mechanism for managing regulatory, security, and compliance risks in AI initiatives, the executives we surveyed ranked risk management only third on the list.
“Organizations investing in Responsible AI are realizing measurable returns—in innovation, performance, and trust.”
This reflects what we’re seeing in the market. Responsible AI is fast becoming an engine for sustained business performance. Companies that integrate responsible practices into their AI strategies are building systems that scale responsibly, deliver measurable impact, and earn stakeholder trust.
Businesses are making steady progress—evolving their programs in response to the growing need for effective, streamlined, proportional governance.
Our survey shows a range of maturity. About six in ten respondents (61%) say their organizations are either at the strategic (28%) or embedded (33%) stage, where Responsible AI is actively integrated into core operations and decision-making. Roughly one in five (21%) report being in the training stage, focused on developing employee training, governance structures, and practical guidance. The remaining 18% say they’re still in the early stages, working to build foundational policies and frameworks.
Together, these stages show that Responsible AI is moving from aspiration to execution—but at very different speeds across the market.
Companies report stronger governance, clearer priorities, and greater accountability as their AI programs mature. Those at the strategic stage are roughly 1.5 to 2 times more likely than those still in the training stage to describe their Responsible AI program capabilities, such as development standards and AI inventorying, as effective.
Seventy-eight percent of respondents in the strategic stage say they’re very effective at defining and communicating Responsible AI priorities, compared with 35 percent in the training stage.
The takeaway: Progress is real, but consistency at scale remains out of reach for most. Organizations with more mature programs appear to recognize that their Responsible AI components deliver discipline, measurability, and sustained business performance, not just risk awareness.
The foundations of policies and governance frameworks are table stakes. The challenge now is executing these programs at scale.
For some, the obstacles are structural—limited tools, unclear ownership, and uneven leadership alignment.
For others, the focus has shifted to consistency—scaling Responsible AI across business units through stronger governance, clearer feedback loops, and smarter technology.
Advanced-stage organizations are addressing this by investing in the tools and processes that make Responsible AI measurable and repeatable. They're building the infrastructure needed to operationalize at scale rather than relying on ad hoc processes. With governance enabled by technology, from AI governance-specific tooling to automation and optimized AI workflows, they're evolving their practices to keep pace with needs that shift as quickly as the jump from traditional AI to generative AI and now to AI agents.
The goal isn’t just implementation. It’s creating systems that can adapt and scale as AI adoption accelerates across the organization.
Alignment around ownership is key. Organizations are moving from shared committees to clear lines of accountability, embedding governance directly into how AI systems are designed and deployed. While committees are essential for early alignment on governance scope, risk posture, and approval workflows, they can become a bottleneck if every AI system requires their review. Maturing organizations address this by agreeing on a split of responsibilities that matches the velocity and scale of their AI strategies.
Fifty-six percent of the executives say their first-line teams (IT, engineering, data, and AI) now lead Responsible AI efforts. That shift puts responsibility closer to the teams building AI and helps ensure governance happens where decisions are made, reframing Responsible AI from a compliance conversation to one about quality enablement.
Today’s tech leaders, data specialists, and risk and compliance teams are working together to align business goals with responsible outcomes. This structure reflects PwC’s three lines of defense model—one built for speed and trust.
Responsible AI is a team sport. Clear roles and tight hand-offs are now essential to scale safely and confidently as AI adoption accelerates.
The pace of change in AI is not slowing. AI agents are the latest AI technology to push organizations to redefine how and what they govern. Companies are already adapting their oversight frameworks to consider fully autonomous systems.
They’re applying lessons learned from the generative AI wave, embedding testing, data access controls, and telemetry directly into design and deployment. Instead of simply reacting to new risks, they’re building adaptive, resilient governance that’s designed to scale with AI’s growing autonomy.
As AI agents become more capable, governance should evolve in real time—shifting from static controls to continuous oversight that keeps pace with innovation.
Responsible AI is shifting from its early mandate of governance to growth enablement, improving and adapting as quickly as the technology it oversees. Emerging practices focus on enabling quality and consistency, paired with the tooling and skill sets that make both achievable.
Leaders are investing in automation, testing, observability, and red teaming to monitor performance in real time, reduce risk, and accelerate governance.
About seven in ten strategic-stage organizations (69%) report having evaluation and testing capabilities in place or planned to govern AI agent activity, a critical foundation as AI systems become more autonomous and widespread.
These technical capabilities help leaders spot issues earlier, adjust controls faster, and build greater confidence in outcomes. Investment is now shifting toward technology enablement and innovation capacity, not just compliance and risk management.
The next phase of Responsible AI maturity embraces a continuous innovation mindset—using technology to strengthen oversight while driving progress and performance.
“Governance for scale means constant feedback, testing, and evolution.”
Responsible AI is essential to sustained business performance from AI investments. Companies need to build governance that moves as fast as the technology. Here’s where to focus next.
From September 26 to October 2, 2025, PwC surveyed 310 US business leaders with director or higher roles (including VP and C-suite titles) across a range of company sizes.