By Sara Putnam, Matt Gorham, and Jake Meek
A surge of tech policy actions in California is causing ripple effects far beyond its borders. The state’s privacy watchdog, the California Privacy Protection Agency (CPPA), recently announced final rules mandating privacy risk assessments, cybersecurity audits, and guardrails around automated decision-making technology (ADMT). Another agency, the California Civil Rights Council, has published final rules governing the use of algorithms in employment decisions. And Governor Gavin Newsom has signed groundbreaking legislation imposing safety requirements on frontier AI models.
These developments continue a trend of US states rushing to fill the policy void in the absence of preemptive federal AI and privacy legislation. Although the Trump administration continues to pursue deregulation at the federal level, state regulations will remain in force unless Congress intervenes.
Companies operating in multiple states should prepare for the new requirements. They can start by assessing their potential exposure and developing an agile governance program and compliance strategy that’s broad-based yet flexible enough to adapt to these as well as other state, federal, and international obligations.
On September 23, 2025, the CPPA announced final rules under the California Consumer Privacy Act (CCPA), imposing ADMT guardrails and requiring affected businesses to conduct risk assessments and annual cyber audits. The agency unanimously approved the rule package, the culmination of nearly two years of drafting, public comment, and debate. The final changes narrow the scope of the ADMT requirements by removing references to AI and behavioral advertising, expanding the circumstances in which businesses can use ADMT, and scaling back when consumers can opt out. The changes also phase in compliance obligations for cyber audits over several years.
In related developments, the California Civil Rights Council (CCRC) published rules governing the use of AI or algorithms in employment decisions, which took effect on October 1, 2025. The rules clarify that the use of an automated-decision system may violate California law if it harms applicants or employees based on protected characteristics such as gender, race, or disability.
Also, on September 29, 2025, Governor Newsom signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA). Among other things, TFAIA requires large frontier model developers to publish on their website an AI framework describing the company’s process for assessing whether the model could pose catastrophic risk. It also imposes transparency and incident reporting requirements and protects whistleblowers from retaliation.
| Topic | Rule or provision | What you need to know | Effective and compliance dates |
| --- | --- | --- | --- |
| ADMT | Art. 11, CCPA regulations | Businesses that use or rely on ADMT in making “significant decisions” about a consumer (e.g., decisions affecting access to employment, housing, credit, health care, education, insurance, or essential goods) must provide detailed pre-use notice of ADMT, offer an opt-out mechanism, and furnish additional individualized information about their ADMT use on request. “ADMT” means technology that processes personal information and replaces or substantially replaces human decision-making. Although the definition omits references to “artificial intelligence” and “behavioral advertising,” it remains broad enough to capture machine learning models, rule-based scoring systems, and facial recognition. | January 1, 2027 |
| Risk assessments | CCPA regulations | Businesses whose data processing could pose “significant risk to consumers’ privacy” must conduct written risk assessments before undertaking certain high-risk data processing activities (e.g., selling or sharing personal information, processing sensitive personal information, using ADMT for significant decisions, or training ADMT for identification, trait inference, emotion analysis, or facial recognition). The assessment must identify the purposes, benefits, reasonably foreseeable risks, and proposed safeguards of the processing, as well as operational elements such as the collection process, retention periods, number of consumers affected, and disclosures made to consumers. Businesses must submit all risk assessments to the CPPA by April 1, 2028 (for assessments conducted in 2026 and 2027) or by April 1 of the following year (for assessments conducted in 2028 onward). | January 1, 2026; first filing due by April 1, 2028 |
| Cyber audits | CCPA regulations | Businesses whose data processing could pose “significant risk to consumers’ security” must complete an independent cybersecurity audit annually. Audits should be based on evidence, not attestations, and conducted by a qualified, objective, and independent professional (who may be external or internal, but if internal, cannot be responsible for the cybersecurity program). The audit should test controls across areas such as multifactor authentication (MFA), encryption of personal information, retention and disposal of personal information, access management, vulnerability testing, incident response, and vendor oversight. Businesses can leverage audits prepared for another purpose under existing frameworks (e.g., NIST CSF 2.0, SOC 2 Type II, ISO 27001) if the scope and independence requirements of the final rule are met. A senior executive must certify the audit’s completion, with the certification filed with the CPPA by staggered deadlines based on the company’s annual revenue. | Certifications due to the CPPA by: April 1, 2028, if annual revenue exceeds $100M; April 1, 2029, if between $50M and $100M; or April 1, 2030, if less than $50M |
| Existing CCPA regulations |  | Updates to existing CCPA regulations include requirements that: | January 1, 2027 |
| Employment decisions | CCRC regulations (FEHA) | Employers are prohibited from using automated-decision systems (ADS) that discriminate against applicants or employees based on protected categories defined under California’s Fair Employment and Housing Act (FEHA). Employers may also have to provide reasonable accommodations consistent with FEHA’s religious and disability protections. “ADS” means a computational process that makes a decision or facilitates human decision-making regarding an employment benefit; it may include AI, machine learning, algorithms, statistics, or other data processing techniques. Employers must preserve ADS-related records for four years after creating the record or making the personnel decision at issue, whichever is later. Anti-bias testing, or the lack of it, is relevant to a claim of employment discrimination or to an available defense. | October 1, 2025 |
| Frontier AI models | TFAIA (SB 53) | Large frontier model developers must publish and keep current a framework on their website describing the company’s process for assessing whether the model could pose catastrophic risk and how it will identify and respond to “critical safety incidents,” among other things. They must also publish a transparency report whenever they release a new or substantially modified frontier model, summarizing their assessments of catastrophic risks, among other things. In addition, they must notify the government of any critical safety incident within 15 days of its discovery, or within 24 hours in the case of imminent risk of death or serious injury. “Frontier model” means a foundation model trained using a quantity of computing power greater than 10^26 integer or floating-point operations. “Large frontier developer” means a frontier developer with annual gross revenues exceeding $500M in the preceding calendar year. Employees of frontier developers who report significant health and safety risks posed by frontier models are protected from retaliation, and developers must provide whistleblowers with anonymous reporting channels. A newly formed consortium is charged with designing a public computing cluster, “CalCompute,” to support safe, ethical, equitable, and sustainable AI innovation. | January 1, 2026 |
Navigating the uneven, shifting terrain of state AI, privacy, and cyber expectations will require a strategic approach. By taking the following steps, businesses can better manage the risks associated with a diverse regulatory environment.