California makes waves: tech guardrails, privacy assessments, and cyber audits

The issue

A surge of tech policy actions in California is causing ripple effects far beyond its borders. The state’s privacy watchdog, the California Privacy Protection Agency (CPPA), recently announced final rules mandating privacy risk assessments, cybersecurity audits, and guardrails around automated decision-making technology (ADMT). Another agency, the California Civil Rights Council, has published final rules governing the use of algorithms in employment decisions. And Governor Gavin Newsom has signed groundbreaking legislation imposing safety requirements on frontier AI models.

These developments continue a trend of US states rushing to fill the policy void in the absence of preemptive federal AI and privacy legislation. Although the Trump administration continues to pursue deregulation at the federal level, state regulations will remain in force unless Congress intervenes.

Companies operating in multiple states should prepare for the new requirements. They can start by assessing their potential exposure and developing an agile governance program and compliance strategy that's broad-based yet flexible enough to adapt to these and other state, federal, and international obligations.

The regulator’s take

On September 23, 2025, the CPPA announced final rules under the California Consumer Privacy Act (CCPA), imposing ADMT guardrails and requiring affected businesses to conduct risk assessments and annual cyber audits. The agency unanimously approved the rule package, the culmination of nearly two years of drafting, comments, and debate. The final changes narrow the scope of the ADMT requirements by removing references to AI and behavioral advertising, expanding when businesses can use ADMT, and scaling back when consumers can opt out. They also phase in cyber audit compliance obligations over several years.

In related developments, the California Civil Rights Council (CCRC) published rules governing the use of AI or algorithms in employment decisions, which took effect on October 1, 2025. The rules clarify that the use of an automated-decision system may violate California law if it harms applicants or employees based on protected characteristics such as gender, race, or disability.

Also, on September 29, 2025, Governor Newsom signed into law Senate Bill 53, the Transparency in Frontier Artificial Intelligence Act (TFAIA). Among other things, TFAIA requires large frontier model developers to publish on their website an AI framework describing the company’s process for assessing whether the model could pose catastrophic risk. It also imposes transparency and incident reporting requirements and protects whistleblowers from retaliation.

What do the California tech, privacy, and cyber rules say?

The key rules and provisions, what you need to know about each, and their effective and compliance dates are summarized below.

ADMT (Art. 11, CCPA regulations)

Businesses that use or rely on ADMT in making "significant decisions" about a consumer (e.g., decisions affecting access to employment, housing, credit, health care, education, insurance, or essential goods) have to provide detailed pre-use notice of the ADMT, offer an opt-out mechanism, and furnish additional individualized information about their ADMT use on request.

"ADMT" means technology that processes personal information and replaces or substantially replaces human decision-making. Although the definition omits references to "artificial intelligence" and "behavioral advertising," it remains broad enough to capture machine learning models, rule-based scoring systems, and facial recognition.

Effective and compliance dates: January 1, 2027
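
To make the scope analysis concrete, below is a minimal triage sketch a compliance team might run over a tool inventory, assuming a simple two-part test (personal information plus substantially replaced human review in a significant-decision area). The field names and triage logic are our own illustration, not terms defined by the rule.

```python
from dataclasses import dataclass

# Decision areas the ADMT rules treat as "significant" (access to these).
SIGNIFICANT_AREAS = {
    "employment", "housing", "credit", "health care",
    "education", "insurance", "essential goods",
}

@dataclass
class DecisionTool:
    name: str
    processes_personal_info: bool
    human_review: str   # "none", "rubber-stamp", or "meaningful"
    decision_area: str  # e.g., "employment", "marketing"

def likely_admt(tool: DecisionTool) -> bool:
    """Flag tools that look like in-scope ADMT: they process personal
    information, replace or substantially replace human decision-making,
    and drive a significant decision."""
    substantially_replaces = tool.human_review in ("none", "rubber-stamp")
    return (
        tool.processes_personal_info
        and substantially_replaces
        and tool.decision_area in SIGNIFICANT_AREAS
    )

screener = DecisionTool("resume ranker", True, "rubber-stamp", "employment")
print(likely_admt(screener))  # True -> plan pre-use notice and opt-out
```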

 

 

Risk assessments (Art. 10, CCPA regulations)

Businesses whose data processing could pose "significant risk to consumers' privacy" must conduct written risk assessments before undertaking certain high-risk data processing activities (e.g., selling or sharing personal information, processing sensitive personal information, using ADMT for significant decisions, or training ADMT for identification, trait inference, emotion analysis, or facial recognition). The assessment must identify the purposes, benefits, reasonably foreseeable risks, and proposed safeguards of the processing, as well as operational elements such as the collection process, retention periods, number of consumers impacted, and disclosures made to consumers.

Businesses have to submit all risk assessments to the CPPA by April 1, 2028 (for assessments conducted in 2026 and 2027) or April 1 of the following year (for assessments conducted in 2028 onward).

Effective and compliance dates: January 1, 2026; first filing due by April 1, 2028
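
Because the required contents map naturally onto a structured record, here is a hypothetical sketch of how a team might capture them for tracking and filing. The schema and deadline helper are our own illustration, not a CPPA template.

```python
from dataclasses import dataclass

@dataclass
class RiskAssessment:
    """One written assessment per high-risk processing activity."""
    processing_activity: str      # e.g., "selling personal information"
    purposes: list[str]
    benefits: list[str]
    foreseeable_risks: list[str]
    safeguards: list[str]
    collection_process: str       # operational elements follow
    retention_period_days: int
    consumers_impacted: int
    consumer_disclosures: list[str]
    year_conducted: int

def cppa_filing_deadline(a: RiskAssessment) -> str:
    """Assessments conducted in 2026-2027 are due April 1, 2028;
    later assessments are due April 1 of the following year."""
    due_year = 2028 if a.year_conducted in (2026, 2027) else a.year_conducted + 1
    return f"April 1, {due_year}"
```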

 

 

Cyber audits (Art. 9, CCPA regulations)

Businesses whose data processing could pose "significant risk to consumers' security" are required to complete an independent cybersecurity audit annually. Audits should be based on evidence, not attestations, and conducted by a qualified, objective, and independent professional (who may be external or internal, but if internal, can't be responsible for the cyber program). The audit should test controls across areas such as MFA, encryption of personal information, retention and disposal of personal information, access management, vulnerability testing, incident response, and vendor oversight.

Businesses can leverage audits prepared for another purpose under existing frameworks (e.g., NIST CSF 2.0, SOC 2 Type II, ISO 27001) if the scope and independence requirements of the final rule are met.

A senior executive is required to certify the audit's completion, with the certification to be filed with the CPPA by staggered deadlines based on the company's annual revenue.

Effective and compliance dates: certifications due to the CPPA by April 1, 2028 (annual revenue above $100M); April 1, 2029 ($50M to $100M); or April 1, 2030 (below $50M)
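
One way to operationalize these requirements is a simple evidence map plus a deadline lookup; the sketch below assumes the control list above is the audit scope, and its handling of the exact $50M and $100M boundaries is our reading of the summary, not the rule text.

```python
# Control areas the rule expects an audit to test, each mapped to the
# evidence collected (the rule requires evidence, not attestations).
AUDIT_EVIDENCE: dict[str, list[str]] = {
    "multifactor authentication": [],
    "encryption of personal information": [],
    "retention and disposal": [],
    "access management": [],
    "vulnerability testing": [],
    "incident response": [],
    "vendor oversight": [],
}

def certification_deadline(annual_revenue_usd: int) -> str:
    """Staggered CPPA certification deadlines keyed to annual revenue."""
    if annual_revenue_usd > 100_000_000:
        return "April 1, 2028"
    if annual_revenue_usd >= 50_000_000:
        return "April 1, 2029"
    return "April 1, 2030"

print(certification_deadline(120_000_000))  # April 1, 2028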

Existing CCPA regulations (Art. 1-8, CCPA regulations)

Updates to existing CCPA regulations include requirements that:

  • Requests to opt out of data sale or sharing take the same or fewer steps than the method to opt in
  • Links to a company's privacy policy appear on any webpage that collects personal information (not just the home page)
  • Businesses clearly display whether they've honored a consumer's opt-out request when a consumer using an opt-out preference signal visits the website
  • Consumers encounter notice of opt-out rights before or at the time data collection begins on connected devices
  • User interfaces offer equal visual prominence for "yes" and "no" choices when asking for consent
  • Consumers may request from companies their personal information collected beyond the prior 12 months, if it exists

Effective and compliance dates: January 1, 2027
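
For the preference-signal requirement, here is a minimal server-side sketch, assuming the visitor's browser sends the Global Privacy Control header (Sec-GPC: 1), which the CCPA framework recognizes as an opt-out preference signal. The store and function names are hypothetical placeholders, not part of the regulations.

```python
class PreferenceStore:
    """Hypothetical store for consumer opt-out preferences."""
    def __init__(self) -> None:
        self._opted_out: set[str] = set()

    def set_opt_out(self, consumer_id: str) -> None:
        self._opted_out.add(consumer_id)

    def is_opted_out(self, consumer_id: str) -> bool:
        return consumer_id in self._opted_out

def opt_out_status(headers: dict[str, str], consumer_id: str,
                   store: PreferenceStore) -> str:
    """Honor a Global Privacy Control signal ("Sec-GPC: 1") and return
    the status message the site should display to the visitor."""
    if headers.get("Sec-GPC") == "1":
        store.set_opt_out(consumer_id)
    if store.is_opted_out(consumer_id):
        return "Your opt-out of sale/sharing has been honored."
    return ""

store = PreferenceStore()
print(opt_out_status({"Sec-GPC": "1"}, "consumer-123", store))
```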

 

 

Employment decisions (Art. 1-10, CCRC regulations)

Employers are prohibited from using automated-decision systems (ADS) that discriminate against applicants or employees based on protected categories defined under California's Fair Employment and Housing Act (FEHA). Employers may also have to provide reasonable accommodations consistent with FEHA's religious and disability protections.

"ADS" means a computational process that makes a decision or facilitates human decision-making regarding an employment benefit. It may include AI, machine learning, algorithms, statistics, or other data processing techniques.

Employers must preserve ADS-related records for four years after creating the record or making the personnel decision at issue, whichever is later.

Anti-bias testing, or the lack of it, is relevant to a claim of employment discrimination or an available defense.

Effective and compliance dates: October 1, 2025
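
The CCRC rules don't prescribe a particular anti-bias test, but one widely used screen in employment analytics is the four-fifths (adverse impact) ratio; the sketch below uses made-up numbers to show the computation.

```python
def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Four-fifths screen: a ratio below 0.8 is the conventional flag
    for closer review; it is a statistical screen, not a legal finding."""
    return group_rate / reference_rate

# Hypothetical outcomes from an ADS-based resume screener.
rate_reference = selection_rate(45, 100)
rate_group = selection_rate(30, 100)
print(adverse_impact_ratio(rate_group, rate_reference))  # ~0.67 -> investigate
```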

 

 

Frontier AI models (Sec. 2-4, Senate Bill 53)

Large frontier model developers must publish and keep current a framework on their website describing the company's process for assessing whether a model could pose catastrophic risk and how it will identify and respond to "critical safety incidents," among other things. They must also publish a transparency report whenever they release a new or substantially modified frontier model, summarizing their assessments of catastrophic risks, among other things. In addition, they have to notify the government of any critical safety incident within 15 days of discovering it, or within 24 hours if there is an imminent risk of death or serious injury.

"Frontier model" means a foundation model trained using a quantity of computing power greater than 10^26 integer or floating-point operations. "Large frontier developer" means a frontier developer that had annual gross revenues exceeding $500M in the preceding calendar year.

Employees of frontier developers who report significant health and safety risks posed by frontier models are protected from retaliation, and developers are required to provide whistleblowers with anonymous reporting channels.

A newly formed consortium is charged with designing a public computing cluster, "CalCompute," to support safe, ethical, equitable, and sustainable AI innovation.

Effective and compliance dates: January 1, 2026
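
To give the 10^26 threshold a sense of scale, here is a back-of-the-envelope check using the common ~6 × parameters × tokens estimate of training compute; the heuristic and the example figures are ours, not the statute's.

```python
FRONTIER_THRESHOLD_OPS = 1e26  # SB 53's training-compute trigger

def estimated_training_ops(params: float, tokens: float) -> float:
    """Rough community heuristic: ~6 operations per parameter per
    training token. The statute counts actual operations used."""
    return 6 * params * tokens

def likely_frontier_model(params: float, tokens: float) -> bool:
    return estimated_training_ops(params, tokens) > FRONTIER_THRESHOLD_OPS

# e.g., a 1T-parameter model trained on 20T tokens: ~1.2e26 ops.
print(likely_frontier_model(1e12, 2e13))  # True
```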

 

 

Your next move

Navigating the uneven, shifting terrain of state AI, privacy, and cyber expectations will require a strategic approach. By taking the following steps, businesses can better manage the risks associated with a diverse regulatory environment.

CCPA actions

  • Inventory your decision tools and plan for ADMT compliance. Determine whether your decision tools are subject to the ADMT rules. If so, develop and implement notice and opt-out mechanisms as required. Institutionalize pre-deployment and ongoing evaluations of the technology for safety, security, reliability, and bias with clear SLAs.
  • Review personal data use cases and update your risk assessment program. Inventory data processing use cases and confirm which active initiatives are subject to CCPA risk assessments. Update or enhance these risk assessments to help meet the new standards and confirm the sufficiency of mitigating controls. Update your go-forward risk assessment program to flag in-scope use cases, and modify workflows to support the new reporting requirement.
  • Assess your cyber audit readiness. Determine whether the cyber audit regulations apply to your organization. If so, identify any gaps between your current program and the final rule, then assess your compliance risk. Align your program to an industry standard such as the NIST Cybersecurity Framework 2.0 and tailor your capabilities accordingly. If your company operates in multiple jurisdictions, determine which ones set the highest bar for each program component and decide what’s necessary for compliance. Take steps to identify and onboard an independent auditor or, if you currently have an external auditor, begin audit planning, scoping, and mapping efforts.

CCRC actions

  • Prepare for testing and record-retention requirements. If your employment decision tools are subject to the ADS rules, perform pre-deployment and ongoing evaluations for bias, safety, security, and reliability. Preserve ADS-related records (including system-generated data, bias testing results, assessments, and audits) for four years after creating the record or making the personnel decision in question, whichever comes later. Require vendors to disclose their ADS testing protocols and data-use practices, and confirm their understanding of ADS-related obligations.
  • Review HR policies and practices. Review your HR policies and update them as needed to reflect ADS requirements. Train HR and management teams on their responsibilities. Confirm that ADS-facilitated decisions have adequate human oversight.

SB 53 actions

  • Prepare for transparency requirements. If you’re a frontier model developer subject to SB 53, start by assessing your reporting capabilities and identifying any potential compliance gaps. Develop or enhance a framework and underlying processes and controls (risk assessment, cybersecurity, governance) that will meet the law’s website disclosure, transparency reporting, and incident-notification requirements. Implement training for affected stakeholders.
  • Bolster your whistleblower protections. Review your existing HR policies and training and update them to meet the law’s anti-retaliation requirements. Develop a compliant, secure, and anonymous reporting mechanism for whistleblowers.
