Responsible AI and regulatory readiness: what you need to know


Summary

  • AI and regulations are advancing rapidly, leading to complex, shifting compliance demands across global, national and local levels.
  • Managing AI compliance involves multiple layers of the AI value chain, from developers to end users.
  • Adopting Responsible AI practices helps mitigate risks, reduce future technical debt and strengthen consumer trust.

This is the fourth in a series of articles focused on how Responsible AI is enhancing risk functions to deliver value and AI innovation.

AI continues to evolve at an astounding rate, as do the regulatory landscapes that govern it. Managing compliance and anticipating future regulations across state, national and international domains is expected to be increasingly complex for the foreseeable future. This shift will likely require a focus on monitoring, interpretation and adaptation of internal controls as the rules change. The burden can be even more acute for small and medium enterprises.

The picture is not a simple one. Some geographies and industries may see trends toward deregulation, while others face increasing regulatory burdens. Compliance requirements and definitions often differ among frameworks, so companies should account for both the rapid pace of technological change and the ways these frameworks diverge.

How AI is rapidly changing the status quo for regulation

The sheer speed of evolution in AI technologies and the increasingly complex regulatory landscape create two interlocking challenges.

Technological evolution

Companies should monitor advancements in AI technologies, how those advances may be affected by current and future regulations, and what that might mean for their organization in terms of both operations and compliance. Knowing the potential technological changes and their possible near-, middle- and long-term implications for your organization will help you develop an AI strategy that is flexible and agile. Developers and deployers may also be affected: as the complexity, and often the risk level, of the technology increases, so may the regulatory requirements around transparency and documentation.

Regulatory complexity

Even as these massive evolutionary shifts occur across the technology, regulatory fragmentation is creating a separate and opposing force. Multinational organizations in particular may face an increasing array of complex compliance issues.

The US federal administration is moving toward less regulation as a means to drive AI innovation and economic growth, whereas Europe’s regulations (notably the EU AI Act and related legislation) create a holistic structure centered on human well-being, democratic processes and fundamental rights. At the same time, other countries are advancing their own distinct approaches. This divergence prompts companies to plan across multiple regulatory trajectories, each with different timelines, definitions and compliance expectations.

To add further complexity, companies operating in the US are navigating a fractured landscape of state and local laws and regulations, many of which do not directly target AI but have AI-relevant implications and applications, e.g., in workplace and employment law and consumer protection.

Additionally, compliance with AI regulations involves numerous interdependencies throughout the AI value chain — from developers and distributors to deployers and end users. Maintaining regulatory adherence across these stakeholders can introduce significant complexity into vendor management and contractual agreements, further complicating an already challenging compliance environment.

The opportunities for Responsible AI in regulatory readiness

By adopting a proactive regulatory readiness posture, compliance teams can reduce exposure to fines, costly enforcement actions, legal costs and other adverse consequences. Beyond these benefits, there are other opportunities for regulatory and compliance teams to unlock value by enabling the responsible use of AI.

Building trust. The consumer relationship is founded on trust, and companies that increasingly rely on AI should build that trust into their systems if they want to keep it. Organizations that proactively demonstrate Responsible AI practices are more likely to align with customer values, which can provide meaningful differentiation in competitive markets.

Reducing future technical debt. Adopting a strategic, principled approach to AI regulatory readiness is also likely to reduce the risk of incurring significant operational costs associated with redesigning or retrofitting systems later — costs that could disrupt product development and innovation efforts.

Key actions to prioritize

Regardless of company size or industry, six foundational moves for AI governance and risk management can provide durable value across many different evolving regulatory regimes:

  • Create a cross-functional AI governance structure that follows Responsible AI principles. Work across business units and between leadership and operations to facilitate cohesive, coordinated AI compliance efforts across the organization. Establish a holistic AI policy that defines the responsible use of AI across your organization, clearly outlining business rules for appropriate use.
  • Create an inventory of AI technologies and use cases across your organization. Confirm you have processes and infrastructure in place to help keep the inventory up to date. Invest in tools to enhance documentation and auditability.
  • Establish a standardized, risk-based methodology to classify and assess AI risks. Select a methodology — one that considers key regulatory frameworks (such as the EU AI Act) and recognized industry standards (such as the NIST AI Risk Management Framework) — to classify and assess risks in AI technologies and their specific use cases.
  • Stay on top of new and upcoming regulations. Staying engaged with industry associations, working with public sector stakeholders and similar outreach efforts can help companies stay current on regulatory developments and the possible impact of proposed or potential regulations.
  • Strengthen due diligence and contractual terms with AI vendors and clearly define accountability. Revisit contracts and relationships with third parties to clarify compliance obligations and identify possible risks and gaps introduced by AI usage. Monitor vendor compliance to mitigate regulatory risks from these third parties.
  • Increase AI literacy across your organization. Provide training and upskilling for employees using AI, based on their roles, skill sets and the different ways that they use AI.

How we can help

By investing in strong governance, cross-functional collaboration and Responsible AI practices, companies can mitigate risks, build consumer trust and support their operations in an increasingly complex global landscape. PwC can help organizations navigate this evolving environment with strategic guidance, regulatory insights and practical tools to help build resilient Responsible AI programs that drive lasting value.

Trust to the power of Responsible AI

Embrace AI-driven transformation while managing the risk, from strategy through execution.

Rohan Sen

Principal, Data Risk and Responsible AI, PwC US


Ingrid Carlson

Director, Responsible AI, PwC US


Phillip Gao

Director, Data Risk and Responsible AI, PwC US


