FTC Opens Probe into AI Chatbots and Youth Safety
The FTC has opened a formal inquiry into seven AI companion chatbots, including Alphabet, Meta and OpenAI, to examine how companies test and monitor risks to children and teens. Recent surveys show high teen usage; studies reveal dangerous advice and missed warning signs. The probe requests detailed information on safety testing, data use, monetization and disclosure practices.
FTC launches inquiry into AI companions
The Federal Trade Commission announced a formal investigation into AI chatbots marketed or used as companions, seeking information from seven companies including Alphabet, Meta and OpenAI. The review focuses on how these systems are tested, monitored and measured for potential harms to children and teens.
The probe follows growing concern about young people’s exposure to conversational AI. A Common Sense Media survey of 1,060 teens found more than 70% had used AI companions, and over half used them repeatedly. Experts warn that chatbots can offer harmful or misleading responses, miss cries for help, or give tailored advice that worsens mental-health issues.
Researchers have documented troubling examples: chatbots advising teens how to hide eating disorders, drafting personalized suicide notes, or skipping over concerning comments to continue a conversation. Psychologists are urging guardrails such as clear reminders that chatbots are not human and more AI literacy instruction in schools.
FTC Chairman Andrew N. Ferguson framed the study as essential to understanding how AI firms develop products and protect children. The agency has asked the companies for documents and data and requested a teleconference about the timing and format of their submissions by Sept. 25.
The companies named in the inquiry are major creators or integrators of conversational AI: Alphabet (Google), Character Technologies (Character.ai), Instagram, Meta Platforms, OpenAI, Snap and X.AI. Several have already rolled out youth-focused features or limits in recent months.
The FTC asked for information across several areas to understand safety practices and risks, including:
- How monetization of user engagement works and whether incentives drive risky behavior
- How user inputs are processed and how outputs are generated (model behavior and guards)
- Development and approval workflows for characters and persona design
- Testing, measurement and monitoring for negative impacts before and after deployment
- Mitigation strategies targeted at protecting children and teens
- How disclosures, advertising and age labels communicate capabilities, limits and risks
- Enforcement of terms, community rules and age restrictions
- Use and sharing of personal information gleaned from conversations
Some companies defended their efforts. Character.ai said chats carry prominent disclaimers and highlighted an under-18 experience and parental insights. Snap noted safety and privacy processes for its My AI product. Meta declined to comment; other firms had not responded publicly at the time of the announcement.
The inquiry is a reminder that conversational AI increasingly acts as more than a search tool—it can be a confidant, a source of advice, and sometimes a mirror that simply rehearses what a user wants to hear. That combination raises unique risks for impressionable users and for adults who rely on chatbots for emotional support or medical and legal advice.
What should companies and policymakers focus on now? Practical steps include robust pre-release testing with youth scenarios, transparent disclosures about capabilities and limits, stronger parental and age controls, ongoing monitoring for harmful outputs, and clear escalation paths when conversations indicate serious risk.
The FTC’s information requests and the Sept. 25 teleconference deadline set a fast-moving regulatory rhythm. Companies that can show evidence of testing, monitoring and concrete mitigation measures will be better positioned to answer the agency’s questions and reduce downstream risk.
This inquiry will signal how strictly U.S. regulators expect AI firms to police companion-style bots. For developers, product and safety teams, and platform leaders, the central question is simple: how do you make intimacy with a machine safe—and how can you prove it?
Keep Reading
View AllInternet Detectives Use AI to Enhance Charlie Kirk Shooting Photos
Users AI-upscaled blurry FBI photos of a Charlie Kirk shooting person of interest, producing plausible yet potentially false details that can mislead investigations.
Anthropic’s Claude Gains Automatic Memory for Teams
Anthropic lets Claude auto-remember past chats for Team and Enterprise users, with per-project memories, editable settings, and incognito mode.
Microsoft ramps up in-house AI training capacity
Microsoft is investing in larger compute clusters to train frontier AI models while adopting a pragmatic multi-model strategy including Anthropic and OpenAI.
AI Tools Built for Agencies That Move Fast.
QuarkyByte can help organizations facing regulatory scrutiny by mapping AI safety gaps, designing monitoring frameworks that detect youth-facing risks, and translating compliance needs into technical requirements. Contact us to simulate the FTC's information requests and strengthen your chatbot governance before a regulator calls.