In PwC’s Global CEO Survey, only about half of companies say they’ve formalised their approach to AI risk. That could hold them back from wider adoption.

Are companies thinking systematically about AI risk?

  • 5-minute read
  • April 21, 2026

PwC’s 29th Global CEO Survey

The early returns from AI are still relatively modest, and risk appetite may be a factor. In PwC’s 29th Global CEO Survey, only about one-quarter of CEOs say their companies have seen lower costs from AI investments, and more than half (56%) say they’ve realised neither revenue nor cost benefits. One possible explanation is leaders’ risk appetite: the survey found that only 51% of companies have formalised their approach to AI risk.

For the rest, the lack of structured risk management for AI could be preventing companies from investing broadly in the technology and applying it widely across functions and processes in the enterprise. Instead, they may be limiting themselves to the kind of isolated, tactical AI projects that often don’t generate a meaningful impact for the organisation.

A key finding from this year’s survey is that tangible returns from AI require enterprise-scale deployment aligned with the company’s business strategy. That demands strong AI foundations, including formalised Responsible AI and risk processes.

A Responsible AI governance framework can provide the guardrails to support AI use, giving leaders the confidence to apply it in more areas of the business (other PwC research has found that Responsible AI correlates with stronger financial performance). Moreover, AI can itself transform risk and governance activities, making it a potential solution to the very issues it creates. Specifically, AI can lower the cost of controls and compliance and increase speed to market, helping companies capture new business opportunities faster and ultimately improving the ROI of AI initiatives.

Here’s how companies can add structure to how they manage AI risk:

Automate at scale
Automate testing, monitoring, and transparency across the AI life cycle. Use real-time data and feedback loops to adjust controls, mitigate risks, and strengthen confidence in outcomes.

Set clear accountability
Review the effectiveness of the ‘three lines’ model to align builders, reviewers, and assurers. Clear ownership enables faster, coordinated decision-making between technical and risk teams.

Adapt governance for AI agents
Build controls and review cycles directly into agentic systems. Integrate oversight early so you can stay ahead of innovation.

Improve over time
Treat Responsible AI as a living system, not a static framework. Reassess regularly as technologies and risks evolve to keep your governance fit for purpose.

Explore the full findings of PwC’s 29th Global CEO Survey

Contact us

Patrice Morot

Global Risk Services Leader, Global Client Partner, PwC France

Shaun Willcocks

Partner, Global Risk Markets Leader, Global Internal Audit Leader, PwC Japan

Tel: +81 (0)90 6478 6991

Leigh Bates

Global AI Trust Leader, Partner, PwC United Kingdom
