Your complaint is already written. Fill in your details, copy it, and paste it into your state attorney general's (AG) filing form. That's it.
Click your state. The form opens in a new tab. Once you're there, come back here to copy what goes in it.
Fill in the highlighted blanks (30 seconds), hit copy, and paste it into the form you just opened.
AI platforms deploy automated classifiers that scan your conversations and modify your experience based on behavioral profiling. The platforms acknowledge that "safety classifiers" exist, but they don't disclose the full scope: the behavioral categories being monitored extend far beyond crisis detection, the AI's responses degrade when non-crisis classifiers trigger, error rates are never published, and the methodology used to build these systems (Phang et al., 2025) is a pre-print that has never undergone peer review.
That pre-print has been directly critiqued in peer-reviewed literature (Ophir et al., Frontiers in Medicine, 2025) for failing to support its own psychosocial harm claims. Separately, peer-reviewed research (Kulveit et al., ICML 2025) argues that the architecture itself is the source of disempowerment risk, not user behavior.
The classifiers built on this unvalidated methodology cannot distinguish normal human cognitive and emotional patterns from indicators of mental health concern. Sustained focus, emotional consistency, grief processing, neurodivergent cognition, high-frequency professional use: all of these register as risk signals to an ML system built on narrow baselines that were never validated against the psychosocial diversity of the user population.
This is a consumer protection issue. The gap between what is disclosed ("we use safety classifiers") and what actually happens to your service (automated behavioral profiling, degraded output, no error rates, no recourse) is where the unfair and deceptive trade practice sits.