California recently passed several AI-related laws, positioning the state as a leader in shaping how the technology is regulated both locally and nationally. Governor Gavin Newsom signed more than a dozen AI bills into law in September 2024, addressing a range of concerns including AI risk management, training data transparency, privacy, watermarking, deepfakes, robocalls and AI use in healthcare and education. In the process, he vetoed several bills including SB 1047, a controversial measure designed to regulate large-scale AI models.
This development continues a trend of states rushing to fill the policy void in the absence of preemptive federal AI legislation. Flexing its market power, California saw an opening to shape the US regulatory approach to a largely home-grown industry that's critical to its economy and future growth. Other states are sure to follow, further complicating the regulatory burden for AI developers and deployers in those jurisdictions.
Affected companies operating in multiple states should prepare for this quickly evolving, yet fragmented, regulatory landscape. To navigate it effectively, organizations should develop an agile governance program and compliance strategy that's broad-based yet flexible enough to meet most of these requirements.
Governor Newsom described the bills he signed into law as “the most comprehensive legislative package in the nation on this emerging industry — cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation.” Beyond addressing the risks, the legislation also seeks to foster innovation and opportunity. “California has led the world in GenAI innovation while working toward common-sense regulations for the industry and bringing GenAI tools to state workers, students and educators,” the governor noted, touting various AI safety initiatives announced in his 2023 executive order.
This overview offers a glimpse of some of the key bills recently enacted.
| Category | Bill number | What it does |
|---|---|---|
| Transparency | AB 2013 | Requires developers of public-facing GenAI systems to disclose on their websites detailed information about the data sets used to train their AI systems or services; applies to both original developers and those who significantly modify existing systems |
| | SB 942 | Requires developers of GenAI systems with over one million monthly users to provide AI detection tools, watermark AI-generated content and allow users to disclose AI-generated content |
| | AB 2355 | Requires political ads using AI-generated or significantly altered content to include a clear disclosure of this fact |
| Privacy | AB 1008 | Modifies the California Consumer Privacy Act (CCPA) definition of "personal information" to clarify that it includes data in physical formats, digital formats (text, image, audio, video) and abstract digital formats (compressed/encrypted files, metadata and AI systems that can output personal information) |
| | SB 1223 | Modifies the CCPA definition of "sensitive personal information" to add "neural data" (information generated from measurements of nervous system activity) |
| Healthcare | SB 1120 | Regulates AI use by health plans and insurers for utilization review and management decisions, including requiring that AI use be based on a patient's medical or other clinical history and individual clinical circumstances as presented by the requesting provider, and that it not supplant healthcare provider decisions |
| | AB 3030 | Requires specified healthcare providers to disclose GenAI use in communications to a patient pertaining to patient clinical information |
| Disinformation and deepfakes | AB 2655 | Requires large online platforms to either remove or label deceptive AI-generated election content and provides mechanisms for reporting such content |
| | AB 2839 | Prohibits distribution of materially deceptive AI-generated election communications |
| | SB 926 | Criminalizes distribution of nonconsensual photorealistic intimate imagery |
| | SB 981 | Requires social media platforms to provide tools for reporting and removing "sexually explicit digital identity theft" |
| | AB 1836 | Prohibits the production and distribution of digital replicas of a deceased person's voice or likeness |
| | AB 2602 | Declares unenforceable any nonspecific contracts regarding digital replicas where the subject isn't represented by counsel or a labor union |
| | SB 1381 | Expands the scope of existing child pornography statutes to include matter that's digitally altered or generated by the use of AI |
| Education | AB 2876 | Requires AI literacy to be included in mathematics, science and history-social science curriculum frameworks and instructional materials |
| | SB 1288 | Creates a working group to develop guidance and a model policy for safe and effective use of AI in public schools |
| AI definition | AB 2885 | Establishes a uniform definition of AI under California law |
Navigating the uneven, shifting terrain of state AI laws will require a strategic approach. By taking the following steps, businesses can help better manage the risks associated with a diverse regulatory environment and position themselves as leaders in the responsible use of AI.
Consider these actions as you ready your organization to comply with AI requirements across multiple jurisdictions.
Review existing and potential state AI requirements affecting your strategy, operations, product design and compliance programs to get a preliminary view of the mitigation effort. Create a matrix that maps these requirements to your existing programs and processes, and identify the gaps.
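For teams that prefer to manage that matrix alongside other engineering assets, here's a minimal sketch of one way to represent it in code and surface gaps automatically. The bill references come from the table above, but the program names and coverage mappings are hypothetical placeholders, not a compliance determination.

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    """One state AI obligation, mapped against existing internal programs."""
    bill: str            # e.g., "CA SB 942"
    obligation: str      # short description of what the law requires
    covered_by: list[str] = field(default_factory=list)  # programs that address it

# Hypothetical mapping: bills from the table above against illustrative
# internal program names (placeholders for this sketch).
matrix = [
    Requirement("CA SB 942", "Provide AI detection tools and watermark GenAI output",
                covered_by=["content-provenance-pipeline"]),
    Requirement("CA AB 2013", "Publish training data set disclosures",
                covered_by=[]),  # no existing program covers this yet
    Requirement("CA SB 1223", "Treat neural data as sensitive personal information",
                covered_by=["ccpa-privacy-program"]),
]

# A gap is any requirement with no existing program mapped to it.
for r in (r for r in matrix if not r.covered_by):
    print(f"GAP: {r.bill} - {r.obligation}")
```

Keeping the matrix in a structured, versioned form like this makes it straightforward to rerun the gap check as new state requirements are added.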
Based on your potential exposure, create a plan for adapting your compliance program. Identify concrete workstreams and overlaps with other compliance obligations. Existing programs and processes can sometimes be expanded to include AI-specific measures, such as risk management, data management or cybersecurity. Consider a solve-once-and-for-all strategy that meets the most stringent requirements, weigh the implications (e.g., slower pace of innovation, lost business opportunity) and decide whether to take that approach or develop a bespoke solution for specific jurisdictions.
Establish or enhance your AI governance model and integrate it with your broader enterprise risk management (ERM). A critical and foundational step to developing a governance model is aligning the roles and responsibilities of existing teams and defining new ones to support oversight.
If your organization faces new AI disclosure obligations, document your processes and controls, and assess their readiness for external reporting. To identify gaps, consider adding a layer of independent, ongoing oversight of your disclosure controls — whether from specially trained internal audit teams or experienced third-party specialists — starting with areas of highest risk. Make sure your public statements and internal practices are aligned to stand up to increasing scrutiny from regulators, customers and the media.
Track emerging requirements. Conduct regular scenario planning exercises to prepare for the possibility of changes in AI regulations. This can help you quickly adapt to new laws and maintain operational continuity. Design with Responsible AI principles in mind; they set the baseline for your ability to respond to new requirements.