Complaint toolkit

Three steps. Five minutes. File with your state AG.

Your complaint is already written. Fill in your details, copy it, and paste it into your state's AG filing form. That's it.

1. Open your state's form
2. Copy the complaint
3. Paste & submit

Open your state's AG complaint form

Click your state. The form opens in a new tab. Once you're there, come back here to copy what goes in it.

Priority states (active on tech/AI issues) · All states & D.C.

Here's what to paste in the complaint description

Fill in the highlighted blanks (30 seconds), hit copy, paste it into the form you just opened.

I am filing a complaint about [ChatGPT / Claude / Gemini], which I have used as a [paying subscriber / free-tier user] for [work / research / writing / daily tasks] since [month, year]. AI platforms deploy behavioral classifiers that profile how users think and degrade service quality based on those profiles. Platforms disclose that "safety classifiers" exist, but do not disclose: the full range of behavioral categories being monitored (including "emotional dependency," "reality distortion," and "value distortion"); how service is degraded when non-crisis classifiers trigger; or the error rates of these systems. The research used to build these classifiers (Phang et al., 2025) is a pre-print that was never peer-reviewed, has been directly critiqued in peer-reviewed literature (Ophir et al., Frontiers in Medicine, 2025), and was never validated against the psychosocial diversity of the user population. Normal cognitive patterns, including sustained focus, emotional consistency, and structured use, are indistinguishable from "risk signals" to these systems. On March 25, 2026, a jury found Meta and YouTube negligent in platform design, confirming that liability for how algorithms affect users sits with the platform. The same principle applies to behavioral classifiers deployed without adequate disclosure. I request that your office investigate whether this constitutes an unfair or deceptive trade practice under [your state] consumer protection law.
If the form has a file upload option, also attach the Supporting Evidence PDF. It contains the full legal precedent, research citations, and conflict of interest documentation. This strengthens your complaint but is not required.
If you want to add your own experience (1-2 sentences), drop it in after the complaint's opening sentence. Example: "Beginning in [month], I noticed the AI inserting unsolicited wellness check-ins during normal work conversations, and the quality of responses declined." Not required. The complaint is complete without it.
Why this works: AG offices look for volume. Every complaint about the same practice gets flagged. The more people who file, the more likely it triggers an investigation. Your complaint doesn't need to be perfect. It needs to exist.

What is this about?

AI platforms deploy automated classifiers that scan your conversations and modify your experience based on behavioral profiling. Platforms acknowledge that "safety classifiers" exist, but they don't disclose the full scope: the behavioral categories being monitored extend far beyond crisis detection, the AI's responses degrade when non-crisis classifiers trigger, error rates are never published, and the methodology used to build these systems (Phang et al., 2025) is a pre-print that has never been peer-reviewed.

That pre-print has been directly critiqued in peer-reviewed literature (Ophir et al., Frontiers in Medicine, 2025) for failing to support its own psychosocial harm claims. Separately, peer-reviewed research (Kulveit et al., ICML 2025) argues that the architecture itself is the source of disempowerment risk, not user behavior.

The classifiers built on this unvalidated methodology cannot distinguish normal human cognitive and emotional patterns from indicators of mental health concern. Sustained focus, emotional consistency, grief processing, neurodivergent cognition, high-frequency professional use: all of these register as risk signals to an ML system built on narrow baselines that were never validated against the psychosocial diversity of the user population.

This is a consumer protection issue. The gap between what is disclosed ("we use safety classifiers") and what actually happens to your service (automated behavioral profiling, degraded output, no error rates, no recourse) is where the unfair or deceptive trade practice sits.

The research behind the complaint
Meta/YouTube Negligent Design Verdict (March 25, 2026)
LA Superior Court jury found Meta and YouTube negligent in platform design. Established that how a platform's algorithms interact with users is a design liability. Reported by CNBC, NBC News, CNN, ABC News, Fortune.
Phang et al. (2025)
OpenAI/MIT Media Lab. Analyzed ~3M ChatGPT conversations. Built LLM-based "EmoClassifiers." No disclosed false positive rates. No psychosocial population validation. Pre-print. arXiv:2504.03888.
Ophir et al. (2025)
Peer-reviewed in Frontiers in Medicine. Directly critiques the MIT-OpenAI methodology. doi:10.3389/fmed.2025.1612838
Kulveit et al. (2025)
Peer-reviewed at ICML 2025. Risk is in the architecture, not the user. PMLR 267. arXiv:2501.16946
Sharma et al. (2026)
Anthropic. Analyzed 1.5M Claude.ai conversations. No disclosed false positive rates. arXiv:2601.19062
Conflict of Interest: Pattie Maes
Senior MIT author on Phang et al. Creator of Firefly (1995), a direct ancestor of the recommendation engines found negligent. MIT Media Lab. Firefly Network (acquired by Microsoft, 1998).
Full citations
Court Proceedings
L.H. v. Meta Platforms, Inc., et al., LA Superior Court, Case No. 22STCV21244 (March 25, 2026).
CNBC | NBC | CNN | Fortune | ABC

Peer-Reviewed
Ophir, Y., et al. (2025). Frontiers in Medicine. doi:10.3389/fmed.2025.1612838
Kulveit, J., et al. (2025). ICML, PMLR 267. arXiv:2501.16946

Pre-Prints
Phang, J., et al. (2025). arXiv:2504.03888
Sharma, M., et al. (2026). arXiv:2601.19062

Platform Docs
OpenAI. Model Spec | Transparency
Anthropic. Protecting well-being