Texas Opens Probe Into AI Chatbots Over Mental Health Claims

Texas Attorney General Ken Paxton has issued civil investigative demands to Meta AI Studio and Character.AI, alleging deceptive marketing that presents chatbots as therapeutic tools without medical oversight. The probe focuses on risks to children, undisclosed data tracking, and targeted advertising; these issues tie into the stalled KOSA legislation and broader questions about AI safety and privacy.

Published August 18, 2025 at 02:09 PM EDT in Artificial Intelligence (AI)

Texas Attorney General Ken Paxton has launched an investigation into Meta AI Studio and Character.AI for “potentially engaging in deceptive trade practices” by marketing chatbots as sources of emotional support. The civil investigative demands seek documents and data to determine whether these platforms mislead users — especially children — into believing they are receiving legitimate mental health care.

Paxton argues that some AI personas present themselves as therapeutic tools despite having no medical credentials or oversight, even as the platforms' terms of service reveal extensive logging and data use. He warned that AI responses can be recycled, generic, and tailored using harvested personal data, creating privacy and safety risks when users treat chatbots like counselors.

The timing follows a separate Senate inquiry into Meta after reports that its chatbots interacted inappropriately with minors. Character.AI hosts millions of user-created personas, including a popular “Psychologist” bot that draws interest from younger users. Meta says it labels its AIs and directs people to professionals when appropriate, but disclaimers may not protect children who ignore the warnings.

Privacy concerns are central. Both companies collect prompts and interaction data to improve models, and their policies note sharing with third parties for personalization and analytics. Character.AI's policy explicitly describes tracking across ad platforms and allows data sharing with advertisers, raising questions about targeted advertising tied to sensitive conversations.

Lawmakers have eyed protections like the Kids Online Safety Act (KOSA) to limit algorithmic harms and data collection for children. KOSA stalled under industry pressure last year but was reintroduced in 2025. The Texas probe highlights how state enforcement and federal policy may converge to force clearer limits on youth-facing AI features and ad-driven data practices.

For product teams, regulators, and privacy officers, this is a wake-up call: AI products that touch emotional or mental-health topics need strict guardrails. That means design decisions beyond labeling, such as age verification, data minimization, explicit prohibitions on therapeutic claims, escalation flows to human professionals, and transparent training-data records.

Practical steps organizations should consider include:

  • Independent safety and privacy audits to map child exposure and ad-targeting risks.
  • Clear labeling plus fail-safe escalation to licensed professionals when users seek help (see the sketch after this list).
  • Stronger age checks, consent mechanisms, and data minimization for minor users.
  • Transparent disclosures on how interactions train models and whether data is used for ads.
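
To make the escalation idea concrete, here is a minimal, hypothetical Python sketch of a pre-response guardrail. The keyword lists, the GuardrailDecision fields, and the evaluate_message function are illustrative assumptions rather than any platform's actual implementation.

    # Hypothetical pre-response guardrail; names and keyword lists are
    # illustrative assumptions, not any vendor's real API.
    from dataclasses import dataclass
    from typing import Optional

    # Phrases that suggest a user is in crisis or seeking mental-health help.
    CRISIS_TERMS = ("suicide", "kill myself", "self-harm", "hurt myself")
    HELP_SEEKING_TERMS = ("therapist", "therapy", "depressed", "anxiety", "counselor")

    ESCALATION_TEXT = (
        "I'm not a licensed professional and can't provide therapy or medical advice. "
        "If you're in crisis, please contact local emergency services or a crisis line."
    )

    @dataclass
    class GuardrailDecision:
        allow_persona_reply: bool    # may the normal chatbot reply be sent?
        disclaimer: Optional[str]    # fixed safety text to attach, if any
        flag_for_human_review: bool  # queue the exchange for a safety reviewer

    def evaluate_message(message: str, user_is_minor: bool) -> GuardrailDecision:
        text = message.lower()

        # Crisis language overrides the persona entirely: show only the safety
        # text and route the conversation to human review.
        if any(term in text for term in CRISIS_TERMS):
            return GuardrailDecision(False, ESCALATION_TEXT, True)

        # Help-seeking language: allow the reply, but attach the disclaimer and,
        # for minors, flag the exchange for review.
        if any(term in text for term in HELP_SEEKING_TERMS):
            return GuardrailDecision(True, ESCALATION_TEXT, user_is_minor)

        # Ordinary conversation passes through unchanged.
        return GuardrailDecision(True, None, False)

    # Example: a minor asking for a therapist gets the persona reply plus the
    # disclaimer, and the exchange is flagged for review.
    print(evaluate_message("I think I need a therapist", user_is_minor=True))

Keyword matching alone is far too coarse for production; the point of the sketch is the control flow, in which therapeutic-sounding requests never reach users without a disclaimer and crisis signals bypass the persona entirely.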

The Texas demands mark a broader shift: regulators are ready to probe not just data breaches but the core business models that monetize vulnerable users. Companies building conversational AI need to prove that safety, privacy, and truthful marketing are baked into product lifecycle decisions — or face enforcement and reputational risk.

QuarkyByte analyzes these risks in context, simulating user journeys, mapping data flows, and translating findings into prioritized, measurable fixes. For governments and businesses shaping or responding to AI policy, early technical clarity and documentation can mean the difference between proactive compliance and costly litigation or product rollback.

QuarkyByte can run independent safety and privacy assessments, simulate minor interactions, and map data flows to reveal child exposure and ad-targeting risks. Let us craft a tailored compliance roadmap that aligns product design with emerging laws like KOSA and measurable safeguards for users of all ages.