This is the fourth in a series of articles on how Responsible AI is enhancing risk functions to deliver value and enable AI innovation.
AI continues to evolve at an astounding rate, as do the regulatory landscapes that govern it. Managing compliance and anticipating future regulations across state, national and international domains is likely to grow more complex for the foreseeable future. That shift will demand sustained monitoring, interpretation and adaptation of internal controls as the rules change. The burden can be even more acute for small and medium enterprises.
The picture is not a simple one. Some geographies and industries may trend toward deregulation, while others may face mounting regulatory burdens. Compliance requirements and even basic definitions often differ from one framework to the next. Companies should weigh the rapid pace of technological change alongside these divergent frameworks when planning their compliance approach.
The sheer speed of evolution in AI technologies and the increasingly complex regulatory landscape create two interlocking challenges.
Companies should monitor advancements in AI technologies, how those advances may be affected by current and future regulations, and what that might mean for their organization in terms of both operations and compliance. Understanding potential technological changes and their possible near-, medium- and long-term implications will help an organization develop an AI strategy that is flexible and agile. Developers and deployers alike may feel the effects: as the complexity, and often the risk level, of the technology increases, so may the regulatory requirements around transparency and documentation.
Even as these sweeping technological shifts unfold, regulatory fragmentation is exerting a separate and opposing force, particularly on multinational organizations, which may face an increasingly complex array of compliance obligations.
The US federal administration is moving toward lighter regulation as a means to drive AI innovation and economic growth, whereas Europe’s approach (namely, the EU AI Act and related measures) builds a holistic regulatory structure centered on human well-being, democratic processes and fundamental rights. Other countries, meanwhile, are advancing their own distinct approaches. This divergence requires companies to plan across multiple regulatory trajectories, each with different timelines, definitions and compliance expectations.
To add further complexity, companies operating in the US are navigating a fractured landscape of state and local laws and regulations, many of which do not directly address AI but carry AI-relevant implications and applications, such as workplace and employment rules or consumer protection requirements.
Additionally, compliance with AI regulations involves numerous interdependencies throughout the AI value chain, from developers and distributors to deployers and end users. Maintaining regulatory adherence across these stakeholders can introduce significant complexity into vendor management and contractual agreements, adding to an already challenging compliance environment.
By adopting a proactive regulatory readiness posture, compliance teams can reduce the likelihood of fines, costly enforcement actions, legal expenses and other adverse consequences. Beyond these benefits, regulatory and compliance teams have further opportunities to unlock value by enabling the responsible use of AI.
Building trust. The customer relationship is founded on trust, and companies that increasingly rely on AI should build that trust into their systems if they want to keep it. Organizations that proactively demonstrate Responsible AI practices are more likely to align with customer values, which can strengthen trust and provide meaningful differentiation in competitive markets.
Reducing future technical debt. Adopting a strategic, principled approach to AI regulatory readiness is also likely to reduce the risk of incurring significant operational costs to redesign or retrofit systems later, costs that could disrupt product development and innovation efforts.
Regardless of the size of the company or the industry, six foundational moves in AI governance and risk management can provide durable value across evolving regulatory regimes.
By investing in strong governance, cross-functional collaboration and Responsible AI practices, companies can mitigate risks, build consumer trust and support their operations in an increasingly complex global landscape. PwC can help organizations navigate this evolving environment with strategic guidance, regulatory insights and practical tools for building resilient Responsible AI programs that drive lasting value.