The relationship between state data privacy and artificial intelligence (AI) is becoming increasingly complex as overlapping regulations continue to emerge. Today, laws in more than a dozen states regulate the use of sensitive personal data, mandate transparency around data collection and use, and impose specific consent requirements, a patchwork of state requirements that is likely to keep growing given the deregulatory posture of the new administration and Congress at the federal level. Most, if not all, of these laws stand to impact businesses developing and deploying AI. Some even mention AI directly, setting specific guidelines for profiling and automated decision-making (ADM).
AI often relies on collecting and processing vast amounts of data to function, which can raise significant privacy concerns for consumers, especially when it involves personal identifiers. In response, regulators are introducing stricter privacy laws that aim to control how AI systems handle sensitive personal data, including biometric data, health data and children’s data. And with these new requirements, enforcement activity will soon follow.
As these regulations continue to evolve, accountability, explainability and transparency — core principles of privacy law — are shaping AI-specific laws across multiple states. To stay ahead, organizations should begin understanding data privacy laws and how they might affect the way AI collects, uses and stores sensitive data.
As state-level data privacy laws proliferate, regulators are increasingly focused on AI’s role in data processing and ADM. These laws create new compliance challenges for businesses, especially those using AI systems to collect and analyze sensitive data.
| Requirement | States |
| --- | --- |
| Opt-out (targeted advertising) | CA, CO, CT, DE, FL, IN, KY, MD, MN, MT, NE, NH, NJ, OR, RI, TN, TX, UT, VA, WA |
| Opt-out (sale of personal data) | CA, CO, CT, DE, FL, IA, IN, KY, MD, MN, MT, NE, NH, NJ, OR, RI, TN, TX, UT, VA, WA |
| Privacy policy | CA, CO, CT, DE, FL, IA, IN, KY, MD, MN, MT, NE, NH, NJ, OR, RI, TN, TX, UT, VA |
| Point-of-collection notice | CA |
| Data minimization | CA, CO, CT, DE, FL, IA, IN, KY, MD, MN, MT, NE, NH, NJ, OR, RI, TN, TX, UT, VA, WA |
| Data subject rights — Access | CA, CO, CT, DE, FL, IA, IN, KY, MD, MN, MT, NE, NH, NJ, OR, RI, TN, TX, UT, VA, WA |
| Data subject rights — Delete | CA, CO, CT, DE, FL, IA, IN, KY, MD, MN, MT, NE, NH, NJ, OR, RI, TN, TX, UT, VA, WA |
| Data subject rights — Correct | CA, CO, CT, DE, FL, IN, KY, MD, MN, MT, NE, NH, NJ, OR, RI, TN, TX, VA |
| Data protection and privacy risk assessments | CA, CO, CT, DE, FL, IN, MD, MN, MT, NH, NJ, OR, RI, TN, TX, VA |
| Facial recognition | MD, IL, TX, WA |
| Privacy laws explicit on AI | CA, CO, CT, DE, FL, IN, KY, MD, MN, MT, NE, NH, NJ, OR, RI, TN, TX, VA |
Opt-in and opt-out rights. Many states now require businesses to offer consumers choices regarding how their personal data is processed. States like Colorado, Virginia and Connecticut require explicit opt-in consent for processing sensitive personal data, while others, like Utah and Iowa, provide opt-out options. California's CCPA, as amended by the CPRA, also allows consumers to opt out of profiling and ADM. For businesses using AI, this means implementing mechanisms that give consumers control over how their data is used, particularly in AI-driven processes.
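To make the mechanics concrete, here is a minimal Python sketch of a consent gate that checks a consumer's recorded choices before an AI pipeline processes their data. All names, fields and state groupings are illustrative assumptions for this sketch, not statutory lists:

```python
from dataclasses import dataclass

# Hypothetical consent record; a real system would persist this per consumer.
@dataclass
class ConsentRecord:
    consumer_id: str
    state: str                          # consumer's state of residence, e.g. "CO"
    opted_in_sensitive: bool = False    # explicit opt-in to sensitive-data processing
    opted_out_sensitive: bool = False   # opt-out, for states that use that model
    opted_out_profiling: bool = False   # opt-out of profiling / ADM

# Illustrative groupings only; the actual statutory lists differ and change.
OPT_IN_STATES = {"CO", "VA", "CT"}   # consent required before processing
OPT_OUT_STATES = {"UT", "IA"}        # processing allowed unless the consumer opts out

def may_process(rec: ConsentRecord, *, sensitive: bool, profiling: bool) -> bool:
    """Return True only if the proposed processing respects recorded choices."""
    if profiling and rec.opted_out_profiling:
        return False                          # honor the profiling/ADM opt-out
    if sensitive:
        if rec.state in OPT_IN_STATES:
            return rec.opted_in_sensitive     # no opt-in, no processing
        if rec.state in OPT_OUT_STATES:
            return not rec.opted_out_sensitive
    return True

# Usage: gate an AI scoring job on stored consent before it runs.
rec = ConsentRecord("c-123", state="CO")
print(may_process(rec, sensitive=True, profiling=False))  # False: no opt-in yet
```

The point of centralizing the check in one function is that every AI pipeline calls the same gate, so a consumer's choice is enforced consistently rather than per-application.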
Privacy policies. Transparency is critical in state data privacy laws, with many requiring businesses to provide clear privacy policies that explain how personal data, including data processed by AI, is handled. States like California, Virginia and Colorado mandate that privacy policies include detailed information on AI data processing, ADM and third-party data sharing. Businesses must regularly update these policies to reflect changes in AI practices and comply with evolving laws.
Point-of-collection notice. Some laws also require businesses to inform consumers at the moment their data is collected. California, for example, mandates that businesses clearly disclose what data is being collected and how it will be used, through accessible, easy-to-understand notices. This is particularly important for AI, where data collected at one point is often later processed for ADM. Businesses should confirm their AI systems clearly communicate these practices to consumers at the point of collection.
Data minimization. Data minimization laws limit the amount of personal data businesses can collect to what is strictly necessary for a specific purpose. States like California, Virginia and Colorado enforce data minimization principles, and Maryland imposes even stricter standards for sensitive data processing. For AI systems, this can be challenging, as they often rely on vast datasets for training. Businesses should verify they only collect the minimum amount of data necessary for AI functions and delete it when no longer needed.
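A sketch of what minimization can look like in practice, assuming a hypothetical purpose-based allow-list and retention schedule: strip any field the AI function does not strictly need, and flag records that have outlived their purpose:

```python
from datetime import datetime, timedelta, timezone

# Hypothetical allow-list: only the fields a given AI purpose actually needs.
ALLOWED_FIELDS = {
    "churn_model": {"account_age_days", "plan_tier", "monthly_usage"},
}
# Illustrative retention period per purpose; real schedules are a legal decision.
RETENTION = {"churn_model": timedelta(days=365)}

def minimize(record: dict, purpose: str) -> dict:
    """Drop every field not on the allow-list for this purpose."""
    allowed = ALLOWED_FIELDS[purpose]
    return {k: v for k, v in record.items() if k in allowed}

def is_expired(collected_at: datetime, purpose: str) -> bool:
    """True when the record has outlived its purpose and should be deleted."""
    return datetime.now(timezone.utc) - collected_at > RETENTION[purpose]

raw = {"name": "Ada", "email": "ada@example.com",
       "account_age_days": 412, "plan_tier": "pro", "monthly_usage": 31.5}
print(minimize(raw, "churn_model"))
# {'account_age_days': 412, 'plan_tier': 'pro', 'monthly_usage': 31.5}
```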
DSR requirements. Data subject rights laws allow individuals to access, correct, delete or restrict their personal data. States like California, Virginia and Colorado grant consumers these rights, with some, like Washington, providing additional protections for health-related data. For businesses using AI, this means building systems that let consumers exercise rights such as correction or deletion without disrupting AI operations, since removing or changing data can affect AI model performance.
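As a rough sketch of the plumbing, assuming an in-memory store and invented names (a real implementation would add identity verification, statutory response deadlines and propagation into model training pipelines):

```python
# Minimal DSR dispatcher over an in-memory store (hypothetical structure).
STORE: dict[str, dict] = {"c-123": {"email": "old@example.com", "plan": "pro"}}

def handle_dsr(consumer_id: str, action: str, updates: dict | None = None):
    """Route an access / correct / delete request."""
    if action == "access":
        return dict(STORE.get(consumer_id, {}))   # return a copy, not a live ref
    if action == "correct":
        STORE.setdefault(consumer_id, {}).update(updates or {})
        return STORE[consumer_id]
    if action == "delete":
        STORE.pop(consumer_id, None)
        # A deletion should also enqueue removal from AI training datasets,
        # which is where deletions can affect model performance on retraining.
        return None
    raise ValueError(f"unknown DSR action: {action}")

print(handle_dsr("c-123", "access"))
handle_dsr("c-123", "correct", {"email": "new@example.com"})
handle_dsr("c-123", "delete")
```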
Data protection and privacy risk assessments. Some states require businesses to conduct privacy risk assessments or data protection impact assessments (DPIAs) for high-risk processing activities, particularly those using AI. California’s CPRA and laws in Colorado and Virginia require businesses to evaluate the risks associated with AI-driven automated decision-making or profiling. These assessments help businesses identify potential privacy risks and implement strategies to mitigate them.
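One way to operationalize this is a trigger checklist run before any new AI use case ships. The factors below are illustrative examples of the high-risk categories these laws describe, not a legal test:

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    """Illustrative attributes of a proposed AI processing activity."""
    uses_sensitive_data: bool    # biometric, health, children's data, etc.
    automated_decisions: bool    # legal or similarly significant effects
    profiling: bool
    sells_or_shares_data: bool

def needs_dpia(uc: UseCase) -> bool:
    """Flag high-risk processing that should get a documented assessment.
    These triggers mirror common statutory categories; counsel decides the rest."""
    return any([uc.uses_sensitive_data, uc.automated_decisions,
                uc.profiling, uc.sells_or_shares_data])

print(needs_dpia(UseCase(uses_sensitive_data=False, automated_decisions=True,
                         profiling=False, sells_or_shares_data=False)))  # True
```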
Facial recognition. Facial recognition technology, which is often AI-powered, is also subject to specific regulations in several states. Illinois, Maryland, Texas and Washington require explicit consent before biometric data, such as facial recognition data, can be collected. AI systems that rely on facial recognition must comply with these consent requirements and confirm that biometric data is securely stored and deleted once it's no longer needed.
Privacy laws explicit on AI. Some states are beginning to specifically address AI within their privacy laws. Texas, for example, restricts AI-driven profiling that significantly impacts individuals and requires businesses to conduct risk assessments. California’s CPRA includes provisions requiring businesses to notify consumers when AI is used in ADM, with additional requirements to come as described in recently proposed rules. For businesses, this means they must comply not only with general data privacy laws but also specific regulations that govern AI’s role in profiling and decision-making.
Enforcement and penalties. Penalties for non-compliance with AI-related privacy laws vary by state. Although most of these laws are enforced solely by the state’s attorney general, Illinois’ Biometric Information Privacy Act allows individuals to sue businesses for improper use of biometric data. Various federal authorities — including the Federal Trade Commission (FTC), Department of Justice (DOJ) and Equal Employment Opportunity Commission (EEOC) — have also taken enforcement actions related to AI, sometimes requiring businesses to delete AI models trained on improperly collected data.
The rise of AI-specific data privacy regulations likely signals that businesses will face increasing scrutiny over how they manage and safeguard personal data. As more states adopt these laws, businesses should develop Responsible AI strategies to achieve compliance. Consider the following steps as you prepare:
Conduct a thorough audit of your AI systems to understand how they collect, process and store personal data. Verify that your systems align with state-specific requirements, particularly around profiling, automated decision-making and sensitive data use.
Make sure your privacy policies reflect how your AI systems use personal data. Be transparent about data collection, use and sharing practices, and confirm that consumers can easily access this information. Regularly update policies to stay compliant with evolving regulations.
Confirm that your AI systems provide clear options for individuals to opt out of automated decision-making or profiling when required by law. Design mechanisms that make it easy for users to exercise their rights, and document consent when necessary.
Implement strong data governance frameworks to manage the lifecycle of personal data used in AI systems. This includes safeguarding sensitive data such as biometric, health or children’s data, and confirming it’s only used for its intended purposes. Regularly review data security practices to safeguard against unauthorized access.
You likely already have controls in place — whether for privacy, cybersecurity or other regulations. By harmonizing these controls into a single baseline, you’ll gain visibility into what’s covered and what remains exposed. AI can help efficiently map new regulatory obligations to your existing baseline and address any gaps in a systematic, prioritized way.
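A minimal sketch of that mapping step, with all control IDs and obligation labels invented for illustration: map each new obligation onto the existing baseline and surface whatever is left uncovered as prioritized gaps.

```python
# Hypothetical control baseline: one control can satisfy many obligations.
BASELINE = {
    "CTRL-01 consent management":  {"opt-in consent", "opt-out of profiling"},
    "CTRL-02 retention schedules": {"data minimization"},
    "CTRL-03 privacy notices":     {"privacy policy", "point-of-collection notice"},
}

def gap_analysis(new_obligations: set[str]) -> tuple[dict, set]:
    """Return (obligation -> covering controls, uncovered obligations)."""
    covered = {}
    for ob in new_obligations:
        hits = [ctrl for ctrl, obs in BASELINE.items() if ob in obs]
        if hits:
            covered[ob] = hits
    gaps = new_obligations - set(covered)
    return covered, gaps

# e.g. obligations extracted from a newly effective state law
covered, gaps = gap_analysis({"opt-out of profiling", "data minimization",
                              "risk assessments"})
print(gaps)  # {'risk assessments'}: the gap to remediate first
```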
Monitor new developments in AI and data privacy laws, both at the state and federal levels. Engage with your legal and compliance teams to confirm that your organization is aware of and prepared for regulatory changes that could impact how you use AI in the future.