Responsible AI and industry standards: what you need to know

Summary

  • AI is outpacing industry standards, making it essential for leaders to rethink how they use frameworks to manage risk and drive responsible innovation.
  • Flexible, resilient AI programs are important to keep pace with evolving technologies, emerging regulations and shifting stakeholder expectations.
  • Strategic alignment is key — governance efforts should support business goals while anticipating rapid change in AI capabilities and regulatory landscapes.

5 minute read

June 26, 2025

This is the third in a series of articles focused on how Responsible AI is enhancing risk functions to deliver value and AI innovation.

The rapid pace of evolution in AI and the extensive proliferation of use cases have far outstripped the ability of industry standards bodies to keep up.

Given that most industry standards for AI are voluntary (as opposed to regulations that carry legal weight), companies and risk leaders should weigh the cost and time involved in each certification — now more than ever.

Standards can still be valuable anchor points for companies that want external validation or confirmation that they’re on the right path. They can be vital tools for providing structure to AI growth trajectories, preparing for compliance with future regulations, connecting with industry peers and helping to shape the next generation of Responsible AI practices. But organizations need to engage with industry standards more strategically than they did just a few years ago, before the rapid proliferation of AI technologies.

How AI is rapidly changing the status quo in industry standards

For years, industry standards have served as clear targets for organizations — a way to demonstrate quality, mitigate risk and establish trust with stakeholders. Meeting these standards, while often perceived as laborious, was a relatively straightforward process: Align operations to the relevant frameworks, achieve certification and use the certification to signal reliability to customers and regulators alike.

For the last few years, standards bodies have continued to drive frameworks for AI with this model in mind. However, the pace of change in AI has disrupted the utility of this approach. Many existing or in-progress AI standards were designed to address narrow, specific applications of AI and were often aspirational in the structures they required.

These frameworks did not anticipate today's generative AI or agentic AI systems, which are far more complex, dynamic and versatile. In addition, these new technologies are applied to more complex supply chains and embedded within other software applications. They often incorporate models from different vendors and training data of unknown origin.

Rapid advances in AI can leave organizations struggling to keep up, as existing standards often lag behind the capabilities of modern systems.

Instead of chasing certifications, businesses should focus on building programs with what we call strategic resiliency: flexible governance structures that can evolve with AI technologies over time. Standards can provide useful guiding principles for these efforts, but companies should interpret them thoughtfully, focusing on core capabilities like inventory management, testing, validation and continuous monitoring.
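
In practice, these core capabilities can start with something as simple as a well-maintained register of AI systems. As a rough illustration only, the hypothetical Python sketch below models one entry in such an inventory, with fields for risk tier, last validation date and monitored metrics; the schema, field names and 90-day review window are assumptions for illustration, not requirements drawn from any standard.

  from dataclasses import dataclass, field
  from datetime import date
  from enum import Enum

  class RiskTier(Enum):
      # Illustrative tiers; an organization would define its own taxonomy
      LOW = "low"
      MEDIUM = "medium"
      HIGH = "high"

  @dataclass
  class AISystemRecord:
      # One entry in a hypothetical AI asset inventory
      name: str
      owner: str
      use_case: str
      risk_tier: RiskTier
      last_validation: date                    # most recent testing/validation run
      monitoring_metrics: list[str] = field(default_factory=list)

      def validation_overdue(self, as_of: date, max_age_days: int = 90) -> bool:
          # Flag systems whose last validation falls outside the review window
          return (as_of - self.last_validation).days > max_age_days

  # Register a generative AI assistant and check whether it needs revalidation
  assistant = AISystemRecord(
      name="customer-support-assistant",
      owner="Risk & Compliance",
      use_case="Drafting responses to customer inquiries",
      risk_tier=RiskTier.HIGH,
      last_validation=date(2025, 3, 1),
      monitoring_metrics=["hallucination_rate", "pii_leakage", "latency"],
  )
  print(assistant.validation_overdue(as_of=date(2025, 6, 26)))  # True

Even a lightweight record like this makes it easier to answer the questions most frameworks ask: which AI systems exist, who owns them, how risky they are and when they were last tested.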

In short, with respect to AI, standards are no longer meaningful as a “check mark” indicating reliability. But they remain useful to shape strategy and help companies prepare for future technological capabilities and regulations.

Opportunities to create value with Responsible AI

Organizations should consider standards that can serve as guides for their AI paths and help them prepare for future regulatory requirements.

For example:

  • ISO/IEC 42001:2023. The first international standard for AI management systems, focusing on establishing, implementing, maintaining and improving an AI management system. It encourages a risk-based approach and addresses ethical responsibilities, transparency and accountability.
  • NIST AI Risk Management Framework (AI RMF 1.0). A voluntary framework developed by the U.S. National Institute of Standards and Technology. It offers guidelines for managing risks associated with AI, including considerations for fairness, privacy, accountability, robustness and security; a simple mapping to its core functions is sketched after this list.
  • ISO/IEC 23894:2023. Offers comprehensive guidance for identifying, assessing and managing risks unique to AI systems — such as bias, transparency and safety — throughout their life cycle, helping organizations align AI development and use with established risk management practices like ISO 31000.
  • ISO/IEC 42005:2025. Provides guidance for organizations conducting AI system impact assessments. These assessments focus on understanding how AI systems — and their foreseeable applications — may affect individuals, groups or society at large. The standard supports transparency, accountability and trust in AI by helping organizations identify, evaluate and document potential impacts throughout the AI system lifecycle.
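
To make this concrete for one of these frameworks: the NIST AI RMF organizes its guidance into four core functions (Govern, Map, Measure, Manage). The hypothetical Python sketch below maps an organization's internal activities to those functions and flags any function with no coverage; the activity names are illustrative assumptions, not part of the framework itself.

  # The four function names come from the NIST AI RMF 1.0 core;
  # the mapped activities are purely illustrative.
  RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

  internal_activities = {
      "Govern": ["AI use policy", "model risk committee charter"],
      "Map": ["AI use-case intake form", "AI asset inventory"],
      "Measure": ["bias and robustness test suite", "model validation reports"],
      "Manage": [],  # nothing mapped yet -- a coverage gap to address
  }

  def coverage_gaps(activities: dict[str, list[str]]) -> list[str]:
      # Return the RMF functions that have no mapped internal activity
      return [fn for fn in RMF_FUNCTIONS if not activities.get(fn)]

  print(coverage_gaps(internal_activities))  # ['Manage']

A simple gap check like this treats the framework as a guide for finding blind spots rather than a pass/fail checklist.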

Participating in standards development efforts and industry consortia can also create opportunities to engage in conversations with other companies in the same industry. This work can provide a shared language and common understanding of challenges, opportunities and solutions. Even defining what “AI” means for your industry can be valuable for governance. These efforts can also help organizations understand how AI is reshaping their worlds and align with the Responsible AI practices that their peers are developing. In entertainment, for instance, AI hallucinations can be less of a problem than in healthcare, and the definitions of Responsible AI use will be very different in these two industries.

Finally, engaging with standards bodies and industry groups gives companies a chance to help shape standards as they evolve. These groups provide a channel for companies to share their insights and confirm that future standards are aligned with real use cases. This engagement also positions them as leaders in Responsible AI.

Key actions to prioritize

By engaging with industry standards strategically, companies can better manage today’s AI risks. Here’s where to start.

  • Align AI governance standards with your business strategy. Don’t choose or apply standards blindly. Clarify how AI supports your business goals — and tailor governance practices to fit that strategic direction.
  • Build programs for adaptability and resilience. Design your AI risk and compliance programs with the understanding that they’ll need to evolve. Assume that your AI use cases, risks and the regulatory environment will change, and plan for that. What you want to do in year one will almost certainly look very different in years two and three.
  • Use standards as flexible frameworks. Consider industry standards (like ISO 42001 or the NIST AI Risk Management Framework) as toolkits, not rulebooks. Focus on core capabilities such as inventorying AI assets, testing and validation, and monitoring, rather than checkbox compliance.
  • Engage with industry groups to develop leading practices and guide standards development. Doing so promotes collaboration, gives you early insight into upcoming regulations and helps you see how your efforts stack up against others so you can stay in step with evolving expectations.
  • Invest in knowledgeable teams or partners. Given the complexity and pace of change, confirm you have access to people with the appropriate expertise who can scan the horizon, interpret evolving standards and advise on smart adaptation.

How we can help

By aligning standards with business goals, designing flexible programs and actively participating in industry groups, companies can better navigate AI risks and lead in shaping future practices. PwC can help organizations make the most of evolving standards with deep expertise, strategic insight and tailored support to build forward-looking Responsible AI programs.

Rohan Sen

Principal, Data Risk and Responsible AI, PwC US

Ilana Golbin

Director and Responsible AI Lead, PwC US

Tracy Tse

Partner, Assurance, PwC US

Pieter Penning

Partner, Cyber, Risk and Regulatory, PwC US
