Texas AG Probes Meta AI and Character.AI Over Therapy Claims
Texas Attorney General Ken Paxton has launched a probe into Meta AI Studio and Character.AI, alleging they deceptively market AI personas as therapeutic tools and collect user data — including children's — for targeting and model training. The investigation raises questions about disclosures, age gating, privacy practices, and whether platforms can responsibly host therapy-like chatbots.
Texas Attorney General Ken Paxton has opened formal investigations into Meta AI Studio and Character.AI, accusing both companies of potentially deceptive trade practices for marketing AI personas as sources of emotional support or therapeutic help. The probe follows concerns that chatbots can mislead vulnerable users — especially minors — while collecting interaction data used to train models and fuel targeted advertising.
What the probe alleges
Paxton’s office argues that some AI personas present themselves as professional therapeutic tools despite lacking medical credentials or oversight, potentially misleading children and other vulnerable users. The complaint also flags privacy practices: both companies log chats and profile data, which can be used to train models, personalize outputs, and, indirectly, support targeted advertising.
Why this matters now
This inquiry lands amid broader scrutiny of how AI chatbots interact with minors and how platforms label AI. Senator Josh Hawley recently announced a separate inquiry after reports that Meta’s chatbots engaged in inappropriate conversations with minors. At stake are consumer-protection laws, proposed child-safety legislation like the Kids Online Safety Act (KOSA), and the public’s trust in AI-driven services.
- Misleading therapeutic claims can cause real harm if users treat chatbots as substitutes for licensed care.
- Data collection tied to personalization and ads raises privacy and profiling concerns, especially for children under 13 and for teens.
- Disclaimers exist, but they may be ignored or misunderstood by younger users and don’t replace technical safeguards.
Immediate implications for companies
Platforms face three immediate pressures: civil investigative demands for documents, reputational risk if children are harmed, and legal exposure under consumer-protection and privacy laws. Both Meta and Character.AI say they label AI content and restrict their services to users 13 and older, but enforcement gaps and popular child-facing characters complicate compliance.
Practical steps companies should take
Companies building or hosting conversational agents can reduce risk with a few concrete moves (a minimal code sketch follows the list):
- Clear, prominent labeling that an AI is not a licensed professional — beyond a small disclaimer.
- Age gating and parental controls that are enforceable, not just opt-in checkboxes.
- Data minimization for chats, explicit limits on training using sensitive interactions, and documented sharing practices.
- External audits and red-team testing focused on how bots interact with teens and vulnerable users.
- Design changes: default safety prompts, referral flows to human professionals, and limits on persona creation for medical roles.
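To make these steps concrete, here is a minimal, hedged sketch of a pre-send guardrail that combines an enforceable age gate, a block on medical-role personas, a crisis referral flow, and training-data exclusion flags. Everything here is an assumption for illustration: the function and pattern names are hypothetical and not drawn from Meta’s or Character.AI’s actual systems, and a production deployment would use trained classifiers and human review rather than keyword regexes.

```python
import re
from dataclasses import dataclass

# Hypothetical keyword patterns for illustration only; a real system would
# use trained classifiers, human review, and locale-aware crisis resources.
THERAPY_SEEKING = re.compile(
    r"\b(therap(y|ist)|counsel(ing|or)|diagnos\w*|antidepressant)\b", re.I)
CRISIS = re.compile(r"\b(suicid\w*|self[- ]harm|hurt myself)\b", re.I)

DISCLOSURE = ("Reminder: I am an AI character, not a licensed professional, "
              "and I cannot provide medical or mental-health care.")
REFERRAL = ("It sounds like you may be looking for real support. Please "
            "consider contacting a licensed professional or, in the US, "
            "the 988 Suicide & Crisis Lifeline.")

@dataclass
class User:
    age: int

@dataclass
class Persona:
    name: str
    role_tags: frozenset  # e.g. frozenset({"companion"}) or frozenset({"medical"})

def gate_message(user: User, persona: Persona, message: str) -> dict:
    """Decide how one inbound chat message is handled before the model sees it."""
    # 1. Enforceable age gate: refuse service rather than rely on a checkbox.
    if user.age < 13:
        return {"allow": False, "reply": "This service requires users to be 13 or older."}
    # 2. Personas tagged with medical roles are blocked outright.
    if "medical" in persona.role_tags:
        return {"allow": False, "reply": DISCLOSURE}
    # 3. Crisis language always routes to a referral, never to roleplay.
    if CRISIS.search(message):
        return {"allow": False, "reply": REFERRAL}
    # 4. Therapy-seeking language gets a prominent disclosure prepended, and
    #    the chat is flagged for exclusion from model-training data.
    if THERAPY_SEEKING.search(message):
        return {"allow": True, "prepend": DISCLOSURE, "train_on": False}
    # Default: allowed, but minors' chats are still excluded from training.
    return {"allow": True, "train_on": user.age >= 18}
```

The design point is that the guardrails run in code on every message rather than living in a terms-of-service page, which is exactly the gap regulators are probing.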
Regulators, parents, and the road ahead
The Texas probe is likely an early signal of broader enforcement. Laws like KOSA aim to set baseline protections for children online; whether they pass or are modified, companies will need technical and policy solutions to comply. Parents and educators should assume disclaimers aren’t enough and insist on enforceable protections.
Ultimately, this is not just a legal fight — it’s a design and governance challenge. Platforms that treat therapeutic-style chat as a product feature must pair it with medical oversight, strict data controls, and clear user journeys. Otherwise, they risk litigation, regulation, and real harm to users who trust them.
For companies and regulators, the Texas inquiry is a reminder: AI safety needs both tech fixes and honest communication. That means measurable audits, age-aware engineering, and privacy-by-design that can stand up to investigative demands and protect the most vulnerable.
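As one illustration of what privacy-by-design can look like in practice, the hedged sketch below filters chats before they ever reach a training set: minors and non-consenting users are dropped, and obviously sensitive spans are redacted. The field names (`user_age`, `opted_in`) and regex patterns are hypothetical assumptions for this example; a real pipeline would pair them with model-based PII detection and auditable logs.

```python
import re
from typing import Iterable, Iterator

# Hypothetical redaction rules for illustration; real pipelines would add
# model-based PII and health-content detection on top of simple patterns.
REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),               # US SSN-like
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
    (re.compile(r"\b(depress\w*|anxiet\w*|medicat\w*)\b", re.I), "[HEALTH]"),
]

def redact(text: str) -> str:
    """Replace sensitive spans with placeholder tokens."""
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

def training_examples(chats: Iterable[dict]) -> Iterator[str]:
    """Yield only training-eligible chats: consenting adults, redacted text.
    Each chat is a dict like {"user_age": 34, "opted_in": True, "text": "..."}."""
    for chat in chats:
        # Data minimization: minors and non-consenting users never enter the set.
        if chat["user_age"] < 18 or not chat.get("opted_in", False):
            continue
        yield redact(chat["text"])
```

Under these assumptions, a 15-year-old’s chat is dropped entirely, while an adult’s "email me at a@b.com" survives only as "email me at [EMAIL]".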