
Managing the risks of generative AI
What security, privacy, internal audit, legal, finance and compliance leaders need to know to harness trusted generative artificial intelligence.
California recently passed several AI-related laws, positioning the state as a leader in shaping how the technology is regulated both locally and nationally. Governor Gavin Newsom signed more than a dozen AI bills into law in September 2024, addressing a range of concerns including AI risk management, training data transparency, privacy, watermarking, deepfakes, robocalls and AI use in healthcare and education. In the process, he vetoed several bills including SB 1047, a controversial measure designed to regulate large-scale AI models.
This development continues a trend of states rushing to fill the policy void in the absence of preemptive federal AI legislation. Flexing its market power, California saw an opening to shape the US regulatory approach to a largely home-grown industry that's critical to its economy and future growth. Other states are sure to follow, further complicating the regulatory burden for AI developers and deployers in those jurisdictions.
Affected companies operating in multiple states should prepare for this quickly evolving, yet fragmented, regulatory landscape. To navigate it effectively, organizations should develop an agile governance program and compliance strategy that's broad-based yet flexible enough to meet requirements across jurisdictions.
Governor Newsom described the bills he signed into law as “the most comprehensive legislative package in the nation on this emerging industry — cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation.” Beyond addressing the risks, the legislation also seeks to foster innovation and opportunity. “California has led the world in GenAI innovation while working toward common-sense regulations for the industry and bringing GenAI tools to state workers, students and educators,” the governor noted, touting various AI safety initiatives announced in his 2023 executive order.
This overview offers a glimpse of some of the key bills recently enacted.
| Category | Bill number | What it does |
| --- | --- | --- |
| Transparency | AB 2013 | Requires developers of public-facing GenAI systems to disclose on their websites detailed information about the data sets used to train those systems or services; applies to both original developers and those who significantly modify existing systems |
| | SB 942 | Requires developers of GenAI systems with over one million monthly users to provide AI detection tools, watermark AI-generated content and allow users to disclose AI-generated content |
| | AB 2355 | Requires political ads using AI-generated or significantly altered content to include a clear disclosure of that fact |
| Privacy | AB 1008 | Amends the California Consumer Privacy Act (CCPA) definition of "personal information" to clarify that it includes data in physical formats, digital formats (text, image, audio, video) and abstract digital formats (compressed or encrypted files, metadata, and AI systems capable of outputting personal information) |
| | SB 1223 | Amends the CCPA definition of "sensitive personal information" to add "neural data" (information generated from measurements of nervous system activity) |
| Healthcare | SB 1120 | Regulates AI use by health plans and insurers for utilization review and management decisions, including requiring that AI use be based on a patient's medical or other clinical history and individual clinical circumstances as presented by the requesting provider, and that it not supplant healthcare provider decisions |
| | AB 3030 | Requires specified healthcare providers to disclose GenAI use in communications to a patient pertaining to the patient's clinical information |
| Disinformation and deepfakes | AB 2655 | Requires large online platforms to remove or label deceptive AI-generated election content and to provide mechanisms for reporting such content |
| | AB 2839 | Prohibits distribution of materially deceptive AI-generated election communications |
| | SB 926 | Criminalizes distribution of nonconsensual photorealistic intimate imagery |
| | SB 981 | Requires social media platforms to provide tools for reporting and removing "sexually explicit digital identity theft" |
| | AB 1836 | Prohibits the production and distribution of digital replicas of a deceased person's voice or likeness |
| | AB 2602 | Declares unenforceable nonspecific contract provisions regarding digital replicas where the subject isn't represented by counsel or a labor union |
| | SB 1381 | Expands the scope of existing child pornography statutes to include material that's digitally altered or generated by the use of AI |
| Education | AB 2876 | Requires AI literacy to be included in mathematics, science and history-social science curriculum frameworks and instructional materials |
| | SB 1288 | Creates a working group to develop guidance and a model policy for the safe and effective use of AI in public schools |
| AI definition | AB 2885 | Establishes a uniform definition of AI under California law |
Navigating the uneven, shifting terrain of state AI laws will require a strategic approach. By taking the following steps, businesses can help better manage the risks associated with a diverse regulatory environment and position themselves as leaders in the responsible use of AI.
Consider these actions as you ready your organization to comply with AI requirements across multiple jurisdictions.
Review existing and potential state AI requirements affecting your strategy, operations, product design and compliance programs to get a preliminary view on the mitigation lift. Create a matrix that maps these requirements to your existing programs and processes and identify gaps.
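A requirements matrix like the one described above can start as something very simple. The sketch below is a hypothetical illustration, not a compliance tool: the bill labels come from the overview table, while the program mappings are invented placeholders an organization would replace with its own inventory.

```python
# Hypothetical requirements matrix: map each state AI requirement to the
# internal program that covers it (None = no coverage yet), then list gaps.
# Program names are illustrative assumptions, not a recommended taxonomy.
requirements_matrix = {
    "AB 2013 (training-data disclosure)": "data governance",
    "SB 942 (AI detection and watermarking)": None,
    "AB 1008 (CCPA personal-information scope)": "privacy program",
    "SB 1120 (AI in utilization review)": None,
}

gaps = [req for req, program in requirements_matrix.items() if program is None]
for req in gaps:
    print(f"GAP: {req}")
```

Even a spreadsheet with the same three columns (requirement, owning program, gap flag) serves the purpose; the point is to make coverage and gaps explicit and reviewable.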