Surge of California laws alters AI regulatory landscape

The issue

California recently passed several AI-related laws, positioning the state as a leader in shaping how the technology is regulated both locally and nationally. Governor Gavin Newsom signed more than a dozen AI bills into law in September 2024, addressing a range of concerns including AI risk management, training data transparency, privacy, watermarking, deepfakes, robocalls and AI use in healthcare and education. In the process, he vetoed several bills, including SB 1047, a controversial measure designed to regulate large-scale AI models.

This development continues a trend of states rushing to fill the policy void in the absence of preemptive federal AI legislation. Flexing its market power, California saw an opening to shape the US regulatory approach to a largely home-grown industry that's critical to its economy and future growth. Other states are sure to follow, further complicating the regulatory burden for AI developers and deployers in those jurisdictions.

Affected companies operating in multiple states should prepare for this quickly evolving, yet fragmented, regulatory landscape. To navigate it effectively, organizations should develop an agile governance program and compliance strategy that's broad-based yet flexible enough to meet most of these requirements.

The regulator’s take

Governor Newsom described the bills he signed into law as “the most comprehensive legislative package in the nation on this emerging industry — cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation.” Beyond addressing the risks, the legislation also seeks to foster innovation and opportunity. “California has led the world in GenAI innovation while working toward common-sense regulations for the industry and bringing GenAI tools to state workers, students and educators,” the governor noted, touting various AI safety initiatives announced in his 2023 executive order.

This overview offers a glimpse of some of the key bills recently enacted.

Category Bill number What it does

Transparency

AB 2013 Requires developers of public-facing GenAI systems to disclose on their websites detailed information about the data sets used to train their AI systems or services, applicable to both original creators and those who significantly modify existing systems
SB 942 Requires developers of GenAI systems with over one million monthly users to provide AI detection tools, watermark AI-generated content and allow users to disclose AI-generated content
AB 2355 Requires political ads using AI-generated or significantly altered content to include a clear disclosure of this fact

Privacy

AB 1008 Modifies the California Consumer Privacy Act (CCPA) definition of “personal information” to clarify that it includes data in physical formats, digital formats (text, image, audio, video) and abstract digital formats (compressed/encrypted files, metadata, and AI systems that can output personal information)
SB 1223 Modifies the CCPA definition of “sensitive personal information” to add “neural data” (info generated from measurements of nervous system activity)

Healthcare

SB 1120 Regulates AI use by health plans and insurers for utilization review and management decisions, including requiring that AI use be based on a patient's medical or other clinical history and individual clinical circumstances as presented by the requesting provider, and that it not supplant healthcare provider decisions
AB 3030 Requires specified healthcare providers to disclose GenAI use in communications to a patient pertaining to patient clinical information

Disinformation and deepfakes

AB 2655 Requires large online platforms to either remove or label deceptive AI-generated election content and provides mechanisms for reporting such content
AB 2839 Prohibits distribution of materially deceptive AI-generated election communications
SB 926 Criminalizes distribution of nonconsensual photorealistic intimate imagery
SB 981 Requires social media platforms to provide tools for reporting and removing “sexually explicit digital identity theft”
AB 1836 Prohibits the production and distribution of digital replicas of a deceased person's voice or likeness
AB 2602 Declares unenforceable any nonspecific contracts regarding digital replicas where the subject isn’t represented by counsel or a labor union
SB 1381 Expands the scope of existing child pornography statutes to include matter that’s digitally altered or generated by the use of AI

Education

AB 2876 Requires AI literacy to be included in mathematics, science and history-social science curriculum frameworks and instructional materials
SB 1288 Creates a working group to develop guidance and a model policy for safe and effective use of AI in public schools

AI definition

AB 2885 Establishes a uniform definition of AI under California law

One of the more consequential measures, AB 2013, requires developers of GenAI systems to post on their website documentation regarding the data used to train the system. This includes:

  • The sources and owners of the data sets
  • A description of how the data sets improved the GenAI system
  • The number and types of data points included in the data sets
  • Whether the data sets include any data protected by copyright, trademark or patent, or are entirely in the public domain
  • Whether the developer purchased or licensed the data sets
  • Whether the data sets include personal information or aggregate consumer information
  • Whether the GenAI system used or uses synthetic data generation in its development

The measure takes effect January 1, 2026, and applies thereafter each time a GenAI system or a substantial modification to such a system is made publicly available to Californians.
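
To make the scope of these documentation items concrete, the sketch below models an AB 2013-style disclosure record and checks it for completeness. The field names and the sample values are illustrative assumptions, not terms prescribed by the statute.

```python
# Hypothetical sketch of an AB 2013-style training-data disclosure record.
# Field names below are illustrative assumptions, not statutory terminology.

REQUIRED_FIELDS = {
    "sources_and_owners",                      # sources and owners of the data sets
    "description_of_use",                      # how the data sets improved the system
    "data_point_count_and_types",              # number and types of data points
    "ip_protected_or_public_domain",           # copyright/trademark/patent status
    "purchased_or_licensed",                   # whether data sets were purchased or licensed
    "contains_personal_information",
    "contains_aggregate_consumer_information",
    "synthetic_data_used",                     # whether synthetic data generation was used
}

def missing_disclosure_fields(record: dict) -> set:
    """Return the documentation items absent from a disclosure record."""
    return REQUIRED_FIELDS - record.keys()

disclosure = {
    "sources_and_owners": ["Example Corpus (Example Org)"],
    "description_of_use": "Pretraining data for text generation.",
    "data_point_count_and_types": {"text_documents": 1_000_000},
    "ip_protected_or_public_domain": "contains copyrighted material",
    "purchased_or_licensed": "licensed",
    "contains_personal_information": False,
    "contains_aggregate_consumer_information": False,
    "synthetic_data_used": True,
}

print(sorted(missing_disclosure_fields(disclosure)))  # empty list when complete
```

A simple completeness check like this could be wired into a release checklist so that publishing a GenAI system, or a substantial modification to one, is blocked until every disclosure item is populated.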

Another significant bill is SB 942, the California AI Transparency Act, which applies to GenAI systems that have over one million monthly visitors and are publicly available within California. It requires providers to make available at no cost an AI detection tool that allows users to assess whether content was created or altered by the system. It also requires providers to offer users the option to include in content created by the system a “manifest disclosure” identifying the content as AI-generated. Providers must also include in this content a latent disclosure that’s detectable by the tool described above and is, to the extent technically feasible, “permanent or extraordinarily difficult to remove.”

Violations are subject to a civil penalty of $5,000 per violation, per day. The law takes effect on January 1, 2026.
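
The manifest/latent distinction can be illustrated with a toy sketch: a visible, human-readable notice alongside an embedded marker that a free detection tool can find. This is purely illustrative; SB 942 does not prescribe a technique, the marker name is invented, and a real latent disclosure would rely on robust watermarking rather than easily stripped metadata.

```python
# Illustrative only: SB 942 does not prescribe a technique, and a real latent
# disclosure would use robust watermarking, not a simple metadata tag.
import base64
import json

LATENT_TAG = "x-ai-provenance"  # hypothetical marker name

def add_disclosures(text: str, provider: str) -> str:
    """Attach a visible (manifest) notice and an embedded (latent) marker."""
    manifest = f"[AI-generated content: {provider}]"
    latent = base64.b64encode(
        json.dumps({LATENT_TAG: provider}).encode()
    ).decode()
    # The manifest disclosure is human-readable; the latent one rides along
    # as machine-readable metadata at the end of the content.
    return f"{manifest}\n{text}\n<!--{latent}-->"

def detect_latent_disclosure(content: str) -> bool:
    """A free 'AI detection tool' in miniature: look for the latent marker."""
    for line in content.splitlines():
        if line.startswith("<!--") and line.endswith("-->"):
            try:
                payload = json.loads(base64.b64decode(line[4:-3]))
            except Exception:
                continue
            if LATENT_TAG in payload:
                return True
    return False

out = add_disclosures("Quarterly summary...", provider="ExampleAI")
print(detect_latent_disclosure(out))  # True
```

The "permanent or extraordinarily difficult to remove" requirement is what separates this toy marker from a compliant latent disclosure: production systems would embed provenance signals in the content itself rather than in detachable metadata.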

Although supportive of these and other guardrails against AI risks, the governor vetoed SB 1047, a sweeping bill aimed at large AI models that could have set the standard for other states and Congress. The bill would have imposed many developer obligations comparable to those under the EU AI Act — e.g., mandatory risk assessments, compliance audits, incident reporting, cybersecurity protections and a “kill switch” shutdown capability — but it differed in several fundamental ways, including the lack of a risk-based approach for tailoring requirements to the level of potential harm. Objecting to the measure’s lack of risk-based criteria, the governor explained in his veto message:

While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.

Your next move

Navigating the uneven, shifting terrain of state AI laws will require a strategic approach. By taking the following steps, businesses can help better manage the risks associated with a diverse regulatory environment and position themselves as leaders in the responsible use of AI.

Consider these actions as you ready your organization to comply with AI requirements across multiple jurisdictions.

Assess your potential exposure under new requirements

Review existing and potential state AI requirements affecting your strategy, operations, product design and compliance programs to get a preliminary view of the mitigation effort required. Create a matrix that maps these requirements to your existing programs and processes and identify gaps.
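
A gap matrix of this kind can be as simple as a table of requirements mapped to supporting programs, with any unmapped requirement flagged for remediation. The sketch below uses real bill numbers, but the program names and mappings are illustrative assumptions about one hypothetical organization.

```python
# Hypothetical gap matrix: bill numbers are real; program names and
# mappings are illustrative assumptions, not recommendations.

requirements = {
    "AB 2013": "publish training-data documentation",
    "SB 942":  "provide an AI detection tool and content watermarking",
    "SB 1223": "treat neural data as sensitive personal information",
}

# Existing programs and processes that (partially) cover each requirement.
existing_programs = {
    "AB 2013": [],                            # no current process covers this
    "SB 942":  ["content provenance pilot"],
    "SB 1223": ["CCPA privacy program"],
}

# Requirements with no supporting program are the compliance gaps.
gaps = [bill for bill, programs in existing_programs.items() if not programs]
print(gaps)
```

Even at spreadsheet scale, keeping the mapping in a structured form makes it easy to re-run the gap analysis each time a new state law is enacted.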

Develop a compliance strategy

Based on your potential exposure, create a plan for adapting your compliance program. Identify concrete workstreams and overlaps with other compliance obligations. Existing programs and processes can sometimes be expanded to include AI-specific measures, such as risk management, data management or cybersecurity. Consider a solve-once-and-for-all strategy that meets the most stringent requirements, weigh the implications (e.g., slower pace of innovation, lost business opportunity) and decide whether to take that approach or develop a bespoke solution for specific jurisdictions.

Establish or enhance your AI governance model

Establish or enhance your AI governance model and integrate it with your broader enterprise risk management (ERM). A critical and foundational step to developing a governance model is aligning the roles and responsibilities of existing teams and defining new ones to support oversight.

Prepare for increased demands for transparency

If your organization faces new AI disclosure obligations, document your processes and controls, and assess their readiness for external reporting. To identify gaps, consider adding a layer of independent, ongoing oversight of your disclosure controls — whether from specially trained internal audit teams or experienced third-party specialists — starting with areas of highest risk. Make sure your public statements and internal practices are aligned to stand up to increasing scrutiny from regulators, customers and the media.

Monitor and adapt to evolving standards

Track emerging requirements. Conduct regular scenario planning exercises to prepare for the possibility of changes in AI regulations. This can help you quickly adapt to new laws and maintain operational continuity. Design with Responsible AI principles in mind, as that sets the baseline for your ability to be responsive to these requirements.
