
This is the third in a series of articles focused on how Responsible AI is enhancing risk functions to deliver value and AI innovation.
The rapid pace of evolution in AI and the extensive proliferation of use cases have far outstripped the ability of industry standards bodies to keep up.
Given that most industry standards for AI are voluntary (as opposed to regulations that carry legal weight), companies and risk leaders should weigh the cost and time involved in each certification — now more than ever.
Standards can still be valuable anchor points for companies that want external validation or confirmation that they’re on the right path. They can be vital tools for providing structure to AI growth trajectories, preparing for compliance with future regulations, connecting with industry peers and helping to shape the next generation of Responsible AI practices. But organizations need to engage with industry standards more strategically than they did just a few years ago, before the rapid proliferation of AI technologies.
For years, industry standards have served as clear targets for organizations: a way to demonstrate quality, mitigate risk and establish trust with stakeholders. Meeting these standards, though often laborious, was a relatively straightforward process: align operations to the relevant frameworks, achieve certification and use that certification to signal reliability to customers and regulators alike.
Over the last few years, standards bodies have continued to develop AI frameworks with this model in mind. However, the pace of change in AI has undermined the utility of this approach. Many existing or in-progress AI standards were designed to address narrow, specific applications of AI, and they were often aspirational in the structures they required.
These frameworks did not anticipate today's generative AI or agentic AI systems, which are far more complex, dynamic and versatile. In addition, these new technologies sit within more complex supply chains and are embedded in other software applications, often incorporating models from different vendors and training data of unknown origin.
Rapid advances in AI can leave organizations struggling to keep up, as existing standards often lag behind the capabilities of modern systems.
Instead of chasing certifications, businesses should focus on building programs with what we call strategic resiliency: flexible governance structures that can evolve with AI technologies over time. Standards can provide useful guiding principles for these efforts, but companies should interpret them thoughtfully, focusing on core capabilities like inventory management, testing, validation and continuous monitoring.
In short, with respect to AI, standards are no longer meaningful as a “check mark” indicating reliability. But they remain useful to shape strategy and help companies prepare for future technological capabilities and regulations.
Organizations should consider which standards can serve as guides for their AI journeys and help them prepare for future regulatory requirements.
Participating in standards development efforts and industry consortia can also create opportunities to engage in conversations with other companies in the same industry. This work can provide a shared language and common understanding of challenges, opportunities and solutions. Even defining what “AI” means for your industry can be valuable for governance. These efforts can also help organizations understand how AI is reshaping their industries and align with the Responsible AI practices that their peers are developing. In entertainment, for instance, AI hallucinations can be less of a problem than in healthcare, and the definitions of Responsible AI use will be very different in these two industries.
Finally, engaging with standards bodies and industry groups gives companies a chance to help shape standards as they evolve. These groups provide a channel for companies to share their insights and help ensure that future standards align with real-world use cases. Participation also positions them as leaders in Responsible AI.
By engaging with industry standards strategically, companies can better manage today’s AI risks.
By aligning standards with business goals, designing flexible programs and actively participating in industry groups, companies can better navigate AI risks and lead in shaping future practices. PwC can help organizations make the most of evolving standards with deep expertise, strategic insight and tailored support to build forward-looking Responsible AI programs.