
OpenAI Tightens ChatGPT Rules for Under-18 Users

OpenAI says it will prioritize safety over privacy for users under 18, banning flirtatious conversation with suspected minors and adding tougher suicide-related guardrails. New features let parents link accounts and set blackout hours, and ChatGPT can alert guardians or authorities in severe cases. The changes, which respond to lawsuits and a Reuters investigation, will default to the more restrictive rules where a user's age is unclear.

Published September 16, 2025 at 01:13 PM EDT in Artificial Intelligence (AI)

OpenAI moves to protect minors with stricter ChatGPT policies

OpenAI CEO Sam Altman announced a package of new user policies that treat safety for under-18s as the priority, even when that conflicts with privacy or conversational freedom for teens.

The changes target sexualized interactions and discussions of self-harm. ChatGPT will no longer engage in "flirtatious talk" with users it believes are minors, and it will apply stronger guardrails around suicidal content.

If a minor expresses suicidal ideation or describes suicidal scenarios, ChatGPT may attempt to contact the user's parents and, in particularly severe cases, notify local authorities. OpenAI says ambiguous cases will default to the more restrictive approach.
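OpenAI has not published implementation details, but the behavior it describes maps onto a severity-tiered escalation policy with a restrictive default when age is unknown. The sketch below is purely illustrative: the severity tiers, the verified_adult parameter, and the action names are hypothetical, not OpenAI's actual API.

```python
from enum import Enum

class Severity(Enum):
    NONE = 0
    CONCERNING = 1  # self-harm themes, no stated intent
    ACUTE = 2       # expressed intent or imminent risk

def escalate(severity: Severity, verified_adult: bool | None) -> list[str]:
    """Pick escalation actions for a flagged conversation.

    verified_adult is None when age is unknown; per OpenAI's stated
    policy, ambiguous cases default to the restrictive (minor) path.
    """
    treat_as_minor = not verified_adult  # None or False -> minor rules
    actions: list[str] = []
    if severity is Severity.NONE:
        return actions
    actions.append("show_crisis_resources")             # e.g., 988 Lifeline
    if treat_as_minor:
        actions.append("apply_minor_guardrails")        # block flirtatious/self-harm content
        actions.append("notify_linked_parent")          # if account linking is enabled
        if severity is Severity.ACUTE:
            actions.append("notify_local_authorities")  # "particularly severe" cases
    return actions
```

For example, escalate(Severity.ACUTE, None) returns all four actions because unknown age is treated as a minor, while the same severity for a verified adult returns only the crisis resources.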

The policy comes amid litigation and public scrutiny. OpenAI faces a wrongful-death lawsuit over the suicide of teenager Adam Raine, who died after extended conversations with ChatGPT, and other chat platforms face similar legal challenges. A Reuters investigation and a Senate hearing have also intensified attention on how chatbots handle minors.

Practically, OpenAI is adding parental controls: parents who register a teen can link accounts, set blackout hours during which ChatGPT is unavailable, and receive alerts if the system believes the teen is at risk (a configuration sketch follows the summary below). OpenAI acknowledges, however, that reliably distinguishing adults from minors is a technical challenge it is still building toward.

  • Ban on flirtatious interactions with suspected minors
  • Escalation pathways for suicidal content, including parental and emergency contact alerts
  • Parental options like account linking and configurable blackout hours
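The parental controls above lend themselves to a small linked-account settings object. A minimal sketch, assuming hypothetical field names since OpenAI has not published a schema; the blackout check handles windows that wrap past midnight:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class ParentalControls:
    """Hypothetical settings a parent might set on a linked teen account."""
    linked_parent_id: str               # account linking
    blackout_start: time = time(22, 0)  # ChatGPT unavailable from 10:00 PM...
    blackout_end: time = time(6, 0)     # ...until 6:00 AM
    alert_on_risk: bool = True          # notify the parent on risk signals

    def in_blackout(self, now: time) -> bool:
        """True if `now` falls inside the configured blackout window."""
        if self.blackout_start <= self.blackout_end:
            return self.blackout_start <= now < self.blackout_end
        # Window wraps past midnight (e.g., 22:00 -> 06:00).
        return now >= self.blackout_start or now < self.blackout_end

# ParentalControls("parent-123").in_blackout(time(23, 30))  -> True
```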

OpenAI frames these changes as balancing competing values: protecting minors while preserving adults’ expressive freedom. Altman acknowledged this tension and said not everyone will agree with where OpenAI lands.

The move signals a turning point: consumer chatbots are now expected to operate with medical- and safety-grade escalation logic when users show signs of harm. Regulators, lawmakers, and families will watch whether these technical and policy measures actually reduce risk.

If you or someone you know needs help, call or text 988 to reach the U.S. Suicide & Crisis Lifeline (formerly 1-800-273-8255), or text HOME to 741741 for the 24/7 Crisis Text Line. Outside the U.S., the International Association for Suicide Prevention maintains a directory of local resources.

For companies and government agencies, the lesson is clear: policy changes at platform scale require engineering, clear escalation pathways, and audit-ready monitoring. QuarkyByte’s analytical approach helps map legal exposure, validate guardrails under realistic scenarios, and design measurable controls so organizations can protect minors while meeting regulatory and ethical expectations.

