As PwC’s US AI Assurance Leader, Jennifer Kosar helps organizations build trust and confidence in their AI systems by connecting governance, risk management, and compliance to business strategy. She works with clients to embed Responsible AI (RAI) principles into real-world operations – turning frameworks into measurable, scalable outcomes.
In this Q&A, Jennifer discusses key insights from PwC’s 2025 Responsible AI Survey, including how organizations are operationalizing RAI, evolving accountability models, and preparing for the next wave of innovation as AI agents and autonomous systems reshape governance.
What are the biggest takeaways from the 2025 Responsible AI Survey?
Jennifer Kosar: Our survey shows that executives believe in the potential for RAI to drive value from AI investments and are already seeing business results. Nearly six in ten executives (58%) said RAI initiatives can enhance ROI and organizational efficiency. A majority (55%) noted that RAI strengthens customer experience and drives innovation, while about half (51%) pointed to improved cybersecurity and data protection as additional gains.
We’re also seeing a shift in focus. Organizations aren’t just signing up to RAI principles; they’re asking how to make them real as the volume of AI deployments increases and agent deployments become a reality. Half of respondents told us their biggest hurdle is turning those principles into practice by scaling and automating governance, clarifying roles and responsibilities, and aligning leadership around a consistent approach.
What’s holding companies back from scaling Responsible AI?
Jennifer Kosar: We found that execution at scale is still the biggest challenge. Half of the executives told us that translating RAI principles into operational processes is their top barrier, and only half have effectively tackled the most basic step – inventorying and tracking their AI use cases. Many are dealing with unclear ownership, limited tools, or fragmented processes that make scaling difficult.
The leading organizations are taking a different approach. Success at scale looks like investing in automation, built-in observability and transparency features, and ongoing monitoring and feedback loops that help governance operate alongside technology. This shift turns RAI from a compliance exercise into a business capability that can deliver consistent, trusted outcomes.
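To make that concrete, here is a minimal sketch of what an automated use-case inventory with a built-in governance check could look like. It is purely illustrative: the survey doesn’t prescribe specific tooling, and every class, field, and threshold below (AIUseCase, risk_tier, the 90-day review window) is a hypothetical stand-in for whatever a real program would define in its own policy.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

# Hypothetical risk tiers for illustration; a real program would map these
# to its own policy and regulatory categories.
RISK_TIERS = ("low", "medium", "high")

@dataclass
class AIUseCase:
    """One entry in an AI use-case inventory."""
    name: str
    owner: str                 # accountable first-line team
    risk_tier: str             # one of RISK_TIERS
    last_reviewed: date
    controls: List[str] = field(default_factory=list)  # e.g. "bias test", "human review"

def overdue_reviews(inventory: List[AIUseCase], max_age_days: int = 90) -> List[AIUseCase]:
    """Flag use cases whose last governance review is older than the allowed window."""
    today = date.today()
    return [uc for uc in inventory if (today - uc.last_reviewed).days > max_age_days]

def missing_controls(inventory: List[AIUseCase]) -> List[AIUseCase]:
    """Flag high-risk use cases with no documented controls."""
    return [uc for uc in inventory if uc.risk_tier == "high" and not uc.controls]

if __name__ == "__main__":
    inventory = [
        AIUseCase("claims triage model", "Data Science", "high",
                  date(2025, 1, 15), controls=["bias test", "human review"]),
        AIUseCase("marketing copy agent", "Marketing IT", "medium",
                  date(2024, 11, 2)),
    ]
    for uc in overdue_reviews(inventory):
        print(f"Review overdue: {uc.name} (owner: {uc.owner})")
    for uc in missing_controls(inventory):
        print(f"High-risk use case missing controls: {uc.name}")
```

Even a simple registry like this turns “inventory and track your AI use cases” from a principle into something that can be automated, monitored, and reported on continuously.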
Who within organizations is responsible for leading RAI efforts?
Jennifer Kosar: We’re seeing leadership models evolve fast. While early structures strongly featured committee models, 56% of organizations say their first-line teams — IT, data, and engineering — now lead RAI efforts. That’s a big change, but an expected one: it puts responsibility closer to the teams actually building AI systems.
The most effective organizations use a “three lines of defense” model: the first line builds and operates responsibly, the second reviews and governs, and the third provides periodic assessment and audit. That structure allows companies to move quickly while still maintaining confidence and control.
How are AI technologies, especially AI agents, reshaping Responsible AI?
Jennifer Kosar: The next big shift is already underway. The majority of leaders surveyed expect AI agents to reshape governance within the next year – both as a challenge and an opportunity. As systems become more autonomous, organizations are adapting oversight frameworks to include built-in testing, real-time monitoring, and automated controls.
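As one illustration of what “built-in testing, real-time monitoring, and automated controls” could mean in practice, the sketch below wraps an agent’s actions in a simple policy gate that blocks or escalates risky actions and logs every decision. It is an assumption-laden example, not a description of any particular framework: the action names, allowlists, and executor are all hypothetical.

```python
import logging
from typing import Any, Callable, Dict

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-oversight")

# Hypothetical action lists for illustration; a real deployment would derive
# these from its governance policy and risk assessments.
APPROVED_ACTIONS = {"search_knowledge_base", "draft_email"}
REQUIRES_HUMAN_APPROVAL = {"send_payment", "delete_record"}

def governed_execute(action: str,
                     params: Dict[str, Any],
                     executor: Callable[[str, Dict[str, Any]], Any]) -> Any:
    """Run an agent action only if automated controls allow it.

    Every decision is logged, so monitoring dashboards can observe the
    agent's behavior in real time.
    """
    if action in REQUIRES_HUMAN_APPROVAL:
        log.warning("Action '%s' escalated for human approval", action)
        return None  # hold until a person signs off
    if action not in APPROVED_ACTIONS:
        log.error("Action '%s' blocked: not on the approved list", action)
        return None
    log.info("Action '%s' permitted with params %s", action, params)
    return executor(action, params)

if __name__ == "__main__":
    # Stand-in executor for illustration; a real agent framework would
    # dispatch to tools or APIs here.
    dummy = lambda action, params: f"executed {action}"
    governed_execute("draft_email", {"to": "client"}, dummy)
    governed_execute("send_payment", {"amount": 1000}, dummy)
```

The point of the sketch is the pattern, not the code: controls sit in the execution path itself rather than in a separate review step, which is what lets governance keep pace as agents act autonomously.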
The focus is now on governance that moves at the speed of innovation – frameworks that adjust and learn as AI becomes more capable. That’s how companies will stay both agile and accountable in the next phase of AI adoption.
How do you expect companies’ approach to RAI to evolve over the next year?
Jennifer Kosar: We’re seeing RAI move from a governance framework to a business enabler, with continuous improvement becoming the new standard.
Companies are investing in automation, monitoring, and real-time feedback to make their RAI programs more adaptive and measurable. The goal is no longer just compliance; it’s building systems that learn, evolve, and strengthen trust as AI becomes more embedded across the enterprise. Leaders in an AI-driven future will be operating at unprecedented speed; strong AI governance provides the confidence and clarity they need.
At PwC, we help clients build trust and reinvent so they can turn complexity into competitive advantage. We’re a tech-forward, people-empowered network with more than 360,000 people in 136 countries. Across audit and assurance, tax and legal, deals and consulting, we help clients build, accelerate and sustain momentum. Find out more at www.pwc.com.
© 2025 PwC. All rights reserved.