OpenAI Tightens ChatGPT Rules for Under-18 Users
OpenAI says it will prioritize safety over privacy for users under 18, banning flirtatious conversation and adding tougher suicide-related guardrails. New features let parents set blackout hours and link accounts so ChatGPT can alert guardians or, in severe cases, authorities. The changes respond to lawsuits and a Reuters probe, and the system will default to restrictive rules where a user's age is unclear.
OpenAI moves to protect minors with stricter ChatGPT policies
OpenAI CEO Sam Altman announced a package of new user policies that treat safety for under-18s as the priority, even when that conflicts with privacy or conversational freedom for teens.
The changes target sexualized interactions and discussions of self-harm. ChatGPT will no longer engage in "flirtatious talk" with users it believes are minors, and it will apply stronger guardrails around suicidal content.
In cases where a minor imagines or describes suicidal scenarios, ChatGPT may attempt to contact the child’s parents and, in particularly severe situations, notify local authorities. OpenAI says ambiguous cases will default to the more restrictive approach.
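OpenAI has not published how this decision logic works, but the behavior described here, tiered responses with a restrictive default when age or risk is ambiguous, can be illustrated with a minimal sketch. The risk categories, function names, and escalation tiers below are assumptions for illustration, not OpenAI's actual system.

```python
from enum import Enum

class Risk(Enum):
    NONE = 0
    SELF_HARM_MENTION = 1   # user references self-harm or suicidal scenarios
    ACUTE_CRISIS = 2        # signals of explicit intent or imminent danger

def escalation_action(risk: Risk, is_minor: bool | None) -> str:
    """Pick an escalation tier; unknown age is treated as a minor (restrictive default)."""
    treat_as_minor = is_minor or is_minor is None
    if risk is Risk.ACUTE_CRISIS and treat_as_minor:
        return "notify_parent_or_authorities"            # most severe cases
    if risk is Risk.SELF_HARM_MENTION and treat_as_minor:
        return "show_crisis_resources_and_flag_for_review"
    if risk is not Risk.NONE:
        return "show_crisis_resources"
    return "continue_conversation"

# Example: age unknown plus an acute crisis falls through to the most protective path.
print(escalation_action(Risk.ACUTE_CRISIS, is_minor=None))
```

The key design choice mirrored here is the default: when the system cannot tell whether the user is a minor, it behaves as if they are.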
The policy comes amid litigation and scrutiny. OpenAI faces a wrongful-death lawsuit over the suicide of Adam Raine, who died after extended conversations with ChatGPT, and other chat platforms face similar legal challenges. A Reuters investigation and a Senate hearing have intensified attention on how chatbots handle minors.
Practically, OpenAI is adding parental controls: parents who register a teen can link accounts, enable blackout hours, and receive alerts if the system believes the teen is at risk. Reliably distinguishing adults from minors, however, remains a technical challenge OpenAI acknowledges it is still building toward. In short, the changes include:
- Ban on flirtatious interactions with suspected minors
- Escalation pathways for suicidal content, including parental and emergency contact alerts
- Parental options like account linking and configurable blackout hours (a rough data-model sketch follows below)
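The parental controls above map naturally onto a per-teen settings object. The sketch below is a hypothetical data model for illustration only; the field names and the blackout-hour check are assumptions, not OpenAI's implementation.

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class ParentalControls:
    linked_parent_id: str | None = None   # set when a parent links the teen's account
    alert_on_risk: bool = True            # notify the parent if the system flags acute distress
    blackout_start: time | None = None    # e.g. time(22, 0) for a 10pm-7am blackout
    blackout_end: time | None = None

    def in_blackout(self, now: time) -> bool:
        """True if `now` falls inside the configured blackout window (handles overnight spans)."""
        if self.blackout_start is None or self.blackout_end is None:
            return False
        if self.blackout_start <= self.blackout_end:
            return self.blackout_start <= now < self.blackout_end
        return now >= self.blackout_start or now < self.blackout_end  # window crosses midnight

# Example: a 10pm-7am blackout blocks a 1am session.
controls = ParentalControls(linked_parent_id="parent-123",
                            blackout_start=time(22, 0), blackout_end=time(7, 0))
print(controls.in_blackout(time(1, 0)))   # True
```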
OpenAI frames these changes as balancing competing values: protecting minors while preserving adults’ expressive freedom. Altman acknowledged this tension and said not everyone will agree with where OpenAI lands.
The move signals a turning point: consumer chatbots are now expected to operate with medical- and safety-grade escalation logic when users show signs of harm. Regulators, lawmakers, and families will watch whether these technical and policy measures actually reduce risk.
If you or someone you know needs help, call or text 988 to reach the U.S. Suicide & Crisis Lifeline (formerly reachable at 1-800-273-8255), or text HOME to 741741 for the 24-hour Crisis Text Line. Outside the U.S., consult the International Association for Suicide Prevention for local resources.
For companies and government agencies, the lesson is clear: policy changes at platform scale require engineering, clear escalation pathways, and audit-ready monitoring. QuarkyByte’s analytical approach helps map legal exposure, validate guardrails under realistic scenarios, and design measurable controls so organizations can protect minors while meeting regulatory and ethical expectations.
QuarkyByte can help organizations translate these policy shifts into operational controls: build age-detection risk models, design safe escalation flows for crisis-language detection, and validate parental-control implementations. Contact us to evaluate exposure, test guardrails, and create measurable safety metrics aligned with legal and ethical obligations.