AI and Consumer Data Rights: Where CCPA, State Privacy Laws, and AI Decisions Collide
TL;DR:
- Consumer data rights (access, deletion, opt-out) don’t disappear when personal data feeds an AI model — but compliance gets exponentially harder.
- California’s new ADMT regulations (effective January 1, 2027) give consumers the right to opt out of automated decision-making and access information about how AI uses their data.
- The FTC has ordered companies to delete entire algorithms built on improperly obtained data at least four times since 2019. Your AI model is not an asset if the training data is tainted — it’s a liability.
Your Consumers Have Rights. Your AI Models Don’t Care.
Twenty states now have comprehensive privacy laws on the books in 2026. Every single one of them grants consumers some version of the same core rights: know what data you collect, delete it on request, and opt out of certain processing. These rights were designed for databases and marketing lists. Now they apply to AI systems that have absorbed consumer data into model weights, training pipelines, and automated decision engines.
The collision between traditional consumer data rights and AI processing is creating compliance problems that most organizations haven’t thought through. When a consumer exercises their right to delete under CCPA Section 1798.105, does that mean you retrain the model? When Virginia’s VCDPA gives consumers the right to opt out of profiling, does that cover your AI-driven credit scoring system? When Colorado’s AI Act (SB 205) demands transparency about training data, what exactly do you disclose?
These aren’t hypothetical questions. Regulators are already answering them — and the answers are expensive.
The Right to Know: What Data Does Your AI Use About Me?
Every major state privacy law — California (CCPA/CPRA), Virginia (VCDPA), Colorado (CPA), Connecticut (CTDPA), and the 16 others now in effect — gives consumers the right to know what personal data a business collects and how it’s used.
For traditional data processing, this is straightforward. For AI, it gets complicated fast.
What Consumers Can Ask
Under these laws, consumers can request:
- Categories of personal data collected — including data used to train, fine-tune, or prompt AI models
- Purpose of processing — which now includes “training AI systems” and “automated decision-making”
- Third parties with whom data is shared — including AI vendors, cloud AI providers, and model training partners
California went further in 2026. SB 361 now requires data brokers to disclose whether personal data is sold to generative AI developers — a direct acknowledgment that AI training is a distinct and regulatorily relevant use of consumer data.
The CCPA ADMT Access Right
On July 24, 2025, the California Privacy Protection Agency (CPPA) adopted final regulations on Automated Decision-Making Technology (ADMT). These rules — effective January 1, 2027 — create a specific right for consumers to access information about how businesses use ADMT, including:
- The logic of the ADMT used in significant decisions
- How ADMT outputs are used to make or influence decisions
- The role of human involvement in the process
This is a step-change. It’s no longer enough to say “we use AI to process applications.” You need to explain how the AI processes them, what logic drives the output, and whether a human actually reviews it.
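One way to operationalize this disclosure requirement is to capture the required elements in a structured record that can be rendered into a consumer-facing response. The sketch below is a hypothetical structure, assuming field names of our own choosing; the CPPA regulations do not prescribe any particular format.

```python
from dataclasses import dataclass

# Hypothetical record for an ADMT access response. Field names are
# illustrative assumptions, not terms defined by the CPPA regulations.
@dataclass
class ADMTDisclosure:
    decision_type: str             # e.g. "loan application screening"
    logic_summary: str             # plain-language description of the model logic
    output_role: str               # how the output makes or influences the decision
    human_review: bool             # whether a human meaningfully reviews outputs
    human_review_description: str = ""

    def to_consumer_response(self) -> str:
        """Render the three required elements into a plain-language response."""
        review = (
            f"A human reviews the output: {self.human_review_description}"
            if self.human_review
            else "The decision is made without meaningful human review."
        )
        return (
            f"Decision: {self.decision_type}\n"
            f"How the system works: {self.logic_summary}\n"
            f"How the output is used: {self.output_role}\n"
            f"{review}"
        )
```

The point of a structure like this is that the access response is assembled from fields your team must affirmatively fill in, which makes a missing logic summary or an unexamined human-review claim visible during documentation rather than during an audit.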
What This Means for Your Team
| Consumer Right | Traditional Compliance | AI Compliance Challenge |
|---|---|---|
| Right to know categories collected | List data fields | Must include training data, embeddings, feature inputs |
| Right to know purpose | “Marketing,” “service delivery” | Must specify “AI training,” “automated decisioning” |
| Right to know third-party sharing | List vendors | Must include AI model providers, cloud ML platforms |
| Right to access ADMT logic (CA) | N/A | Must document and explain model logic to consumers |
Action item: Update your privacy notice to explicitly list AI training, automated decision-making, and profiling as processing purposes. If you use consumer data to train or fine-tune models, say so.
The Right to Delete: The Model Retraining Problem
The right to delete is where consumer data rights and AI collide most violently.
Under CCPA, Virginia, Colorado, Connecticut, and essentially every state privacy law, consumers can request deletion of their personal data. Straightforward for a CRM record. A nightmare for a trained AI model.
Why Deletion Is Different for AI
Once consumer data is used to train a model, it becomes embedded in the model’s weights. You can’t simply `DELETE FROM model WHERE customer_id = 12345`. The data has been mathematically transformed into parameters that influence every prediction the model makes. As one analysis noted, “a business cannot simply remove specific information from a trained model, so a consumer deletion request may require the business to retrain or fine-tune its model without that consumer’s information.”
That means a single deletion request could theoretically require:
- Identifying which models were trained on the consumer’s data
- Removing the data from training datasets
- Retraining the model without that data
- Validating the retrained model still performs acceptably
- Redeploying the new model to production
For organizations running dozens or hundreds of AI models, this is operationally brutal.
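The first two steps above can be wired together in code: a lineage index that maps consumer IDs to the models trained on their data, plus a queue of pending purges per model. This is a minimal sketch under that assumption; all class and variable names are hypothetical, and real retraining orchestration would hang off the batch output.

```python
from collections import defaultdict

class DeletionRequestHandler:
    """Queues deletion requests against affected models for batch retraining."""

    def __init__(self, lineage_index: dict[str, set[str]]):
        # lineage_index: consumer_id -> set of model names trained on their data
        self.lineage = lineage_index
        self.pending: dict[str, set[str]] = defaultdict(set)

    def request_deletion(self, consumer_id: str) -> list[str]:
        """Record a deletion request; return the models that will need retraining."""
        affected = sorted(self.lineage.get(consumer_id, set()))
        for model in affected:
            self.pending[model].add(consumer_id)
        return affected

    def retraining_batch(self, model: str) -> set[str]:
        """Consumer IDs to purge from the dataset before the next scheduled retrain."""
        return self.pending.get(model, set())
```

A request for a consumer with no lineage entry returns an empty list, which is itself useful evidence: it documents that no model retraining was triggered by that deletion.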
The FTC’s Nuclear Option: Algorithmic Disgorgement
If the operational headache isn’t motivation enough, consider what happens when you get the training data wrong in the first place. The FTC has developed a remedy called algorithmic disgorgement — forcing companies to delete not just the data, but the models and algorithms built from that data.
The FTC has deployed this remedy at least four times:
| Case | Year | What Happened |
|---|---|---|
| Cambridge Analytica | 2019 | FTC ordered deletion of all algorithms and models developed using data harvested from millions of Facebook users without consent |
| Everalbum (Paravision) | 2021 | FTC required the company to delete facial recognition algorithms trained on users’ photos that were obtained through deceptive practices |
| WW International (Kurbo) | 2022 | FTC ordered WW to delete algorithms derived from children’s data collected without parental consent, plus a $1.5 million penalty |
| Rite Aid | 2023 | FTC required deletion of all algorithms developed using facial recognition images and banned the company from using facial recognition for five years |
The message is clear: your AI model is only as defensible as the data that built it. If the training data was collected improperly, the entire model — potentially millions of dollars of R&D — can be ordered destroyed.
Practical Approaches to Deletion Compliance
You can’t realistically retrain a model for every individual deletion request. Here’s what defensible programs actually do:
- Machine unlearning techniques — Emerging methods that approximate the effect of retraining without the full computational cost. Still maturing, but worth tracking.
- Batch retraining schedules — Process deletion requests in batches and retrain models on a regular cadence (quarterly, semi-annually) with the purged dataset.
- Data isolation architectures — Separate personally identifiable training data from anonymized/aggregated model inputs. If PII never enters the training pipeline, deletion requests don’t trigger retraining.
- Synthetic data substitution — Replace real consumer data with synthetic data for model training. Gartner projects that by 2027, synthetic data will represent 60% of all AI training datasets across enterprise deployments.
- Document your approach — Whatever method you choose, document it. Regulators want to see you’ve thought through the deletion-AI intersection, not that you’ve solved it perfectly.
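The data isolation approach can be made concrete with keyed pseudonymization at the pipeline boundary: direct identifiers are stripped before a record enters training, and the consumer ID is replaced with an HMAC so deletion requests can still be matched to training rows without raw identifiers living in the pipeline. A minimal sketch, assuming a key managed outside the training environment and illustrative field names:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # in practice, stored and rotated outside the training environment

def pseudonymize(record: dict, pii_fields: tuple[str, ...] = ("name", "email", "ssn")) -> dict:
    """Strip direct identifiers before a record enters the training pipeline.

    The consumer ID is replaced with a keyed hash so that a later deletion
    request can be matched to training rows without the pipeline ever
    holding the raw identifier.
    """
    clean = {k: v for k, v in record.items() if k not in pii_fields}
    clean["pseudo_id"] = hmac.new(
        SECRET_KEY, record["customer_id"].encode(), hashlib.sha256
    ).hexdigest()
    del clean["customer_id"]
    return clean
```

Note that pseudonymization is not anonymization under most state laws; the design choice here is narrower — keeping raw identifiers out of the training store so a deletion request purges rows by hash rather than forcing an immediate retrain.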
The Right to Opt Out: Profiling and Automated Decisions
This is where state laws diverge most significantly — and where AI compliance gets genuinely tricky.
The Profiling Opt-Out Landscape
Most comprehensive state privacy laws include some version of a right to opt out of profiling. But the scope varies dramatically:
| State | Profiling Opt-Out Scope | Key Limitation |
|---|---|---|
| California (CCPA/ADMT) | Opt out of ADMT for “significant decisions” (effective Jan 1, 2027) | Exceptions for fraud prevention, security |
| Virginia (VCDPA) | Opt out of profiling for “decisions that produce legal or similarly significant effects” | Only covers “solely automated” decisions |
| Colorado (CPA) | Opt out of profiling for “decisions that produce legal or similarly significant effects” | 60-day response window |
| Connecticut (CTDPA) | Opt out of profiling for significant decisions | Right to cure expired; AG enforcement active |
| Most other states | Similar to Virginia model | Typically limited to “solely automated” decisions |
The critical distinction: most state laws limit the opt-out right to decisions made solely through automated processing. If a human reviews the AI output before making the final decision, many state opt-out provisions don’t apply. But California’s new ADMT rules are broader — they cover ADMT used to “make or substantially facilitate” significant decisions, which could include AI systems where a human technically approves but routinely rubber-stamps the AI recommendation.
What Counts as a “Significant Decision”?
Most state laws define significant decisions similarly, covering AI used in:
- Credit and lending — AI-driven underwriting, credit scoring, loan pricing
- Employment — AI resume screening, interview assessment, performance evaluation
- Housing — AI tenant screening, rental pricing
- Insurance — AI risk scoring, claims processing, pricing
- Education — AI admissions decisions, financial aid determination
- Healthcare — AI treatment recommendations, coverage decisions
If your AI system influences any of these decision categories, consumer opt-out rights almost certainly apply in at least some jurisdictions.
Colorado SB 205: The Training Data Transparency Requirement
Colorado’s AI Act, effective June 30, 2026, adds another layer. AI developers must provide deployers with “high-level summaries of the type of data used to train” their AI systems, along with documentation of known limitations and reasonably foreseeable misuses. This means if you’re deploying a vendor’s AI system for consequential decisions, you need training data documentation from that vendor — and “we don’t disclose that” isn’t a compliant answer.
The Compliance Map: Rights Across 20 State Laws
Here’s how the major consumer data rights apply to AI processing across key state privacy law categories:
| Consumer Right | States That Grant It | AI-Specific Wrinkle |
|---|---|---|
| Right to know/access | All 20 states with comprehensive laws | Must cover AI training data usage, not just stored data |
| Right to delete | All 20 states | May require model retraining or machine unlearning |
| Right to opt out of sale | All 20 states | Selling data to AI training providers likely qualifies |
| Right to opt out of profiling | ~15 states (varies in scope) | “Solely automated” limitation may exclude human-in-the-loop AI |
| Right to opt out of ADMT | California (Jan 2027) | Broadest scope — covers AI that “substantially facilitates” decisions |
| Right to data portability | Most states | Must include AI-derived inferences in some jurisdictions |
| Right to correct | Most states | Correcting training data ≠ correcting the model’s learned patterns |
So What? Building a Consumer Rights Program That Survives AI
The convergence of 20 state privacy laws and accelerating AI adoption means the compliance surface area is expanding faster than most teams can map it. Here’s the 30/60/90-day plan:
Days 1-30: Assessment
- Inventory every AI system that processes consumer personal data
- Map which state privacy laws apply based on consumer location
- Identify which AI systems make or influence “significant decisions”
- Audit current privacy notices for AI-specific disclosures
Days 31-60: Infrastructure
- Implement tracking for which consumer data enters AI training pipelines
- Establish a batch retraining process for handling deletion requests
- Build an ADMT opt-out mechanism ahead of California’s January 2027 deadline
- Update data subject access request (DSAR) workflows to include AI-specific responses
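The first infrastructure step — tracking which consumer data enters AI training pipelines — can be as simple as an append-only lineage log written at training time, which then answers both the DSAR question ("which models used my data?") and the deletion question. A sketch under that assumption; class and method names are hypothetical:

```python
from datetime import datetime, timezone

class TrainingLineageLog:
    """Append-only record of which consumers' data fed which model version."""

    def __init__(self):
        self.entries: list[dict] = []

    def record_run(self, model: str, version: str, consumer_ids: set[str]) -> None:
        """Call once per training run, at the moment the dataset is assembled."""
        self.entries.append({
            "model": model,
            "version": version,
            "consumer_ids": sorted(consumer_ids),
            "trained_at": datetime.now(timezone.utc).isoformat(),
        })

    def models_for_consumer(self, consumer_id: str) -> list[str]:
        """Answers the DSAR question: which model versions used this consumer's data?"""
        return [
            f"{e['model']}:{e['version']}"
            for e in self.entries
            if consumer_id in e["consumer_ids"]
        ]
```

In production this would live in a durable store rather than memory, but the key design point survives: lineage is recorded at training time, because reconstructing it after a DSAR arrives is usually impossible.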
Days 61-90: Documentation and Training
- Document your deletion-and-AI approach for regulator inquiries
- Train customer service and privacy teams on AI-specific consumer requests
- Prepare Colorado SB 205 training data disclosures (due June 30, 2026)
- Conduct a tabletop exercise: “Consumer requests deletion of data used to train our credit model”
The organizations that treat consumer data rights and AI governance as separate workstreams are building compliance debt. The laws are converging. Your program should too.
Need a framework for mapping consumer data rights to your AI systems? The Data Privacy Compliance Kit includes privacy impact assessment templates, state law comparison matrices, and consumer rights response workflows designed for AI-era compliance.
FAQ
Do consumers have the right to opt out of AI making decisions about them?
In most states with comprehensive privacy laws, consumers can opt out of profiling that produces “legal or similarly significant effects” — but only when decisions are made solely through automated processing. California’s new ADMT regulations, effective January 1, 2027, go further by covering AI systems that “substantially facilitate” significant decisions, even when a human is nominally involved. If your AI system influences credit, employment, housing, insurance, or healthcare decisions, opt-out rights likely apply in at least some jurisdictions.
What happens when a consumer requests deletion of data that was used to train an AI model?
This is one of the hardest compliance problems in AI governance. Once data is embedded in model weights through training, it can’t be surgically removed. Organizations typically address this through batch retraining schedules (purging data and periodically retraining), data isolation architectures that prevent PII from entering training pipelines, or synthetic data substitution. The FTC has ordered companies like Everalbum and WW International to delete entire algorithms built on improperly obtained data — making proper data governance before training critical.
How many states have AI-related consumer privacy rights in 2026?
Twenty states now have comprehensive privacy laws in effect as of 2026, with Indiana, Kentucky, and Rhode Island joining in January. All grant core rights (access, deletion, opt-out of sale) that apply to AI-processed data. Approximately 15 states include some form of profiling opt-out right. California’s ADMT regulations (effective January 2027) and Colorado’s AI Act (effective June 2026) add the most AI-specific consumer protections, including rights to access information about automated decision logic and requirements for training data transparency.
Rebecca Leung
Rebecca Leung has 8+ years of risk and compliance experience across first and second line roles at commercial banks, asset managers, and fintechs. Former management consultant advising financial institutions on risk strategy. Founder of RiskTemplates.