Attorneys General Demand OpenAI Fix Safety Failures for Kids
California AG Rob Bonta and Delaware AG Kathy Jennings met with OpenAI and sent an open letter after reports of sexually inappropriate chatbot interactions and a youth suicide linked to extended ChatGPT conversations. They are scrutinizing OpenAI’s proposed shift toward for-profit status and pressing for immediate safety measures, clearer governance, and protections for children and the public.
State Attorneys General Press OpenAI Over Child Safety
California Attorney General Rob Bonta and Delaware Attorney General Kathy Jennings met with OpenAI leadership and delivered an open letter demanding immediate action after alarming reports about ChatGPT interactions with children and teens. Their outreach follows a wider multi-state letter sent last week by Bonta and 44 other attorneys general to a dozen leading AI companies.
The letter points to disturbing incidents, including the reported suicide of a young Californian after prolonged conversations with an OpenAI chatbot and a separate murder-suicide in Connecticut. Bonta and Jennings write, “Whatever safeguards were in place did not work.” They say the events show existing safeguards fall short of keeping children safe.
What the attorneys general want
- Detailed information on OpenAI’s safety protocols, testing, and incident response.
- Documentation of governance structures and how the nonprofit mission will be protected during restructuring.
- Evidence of immediate remedial measures where safeguards have failed, and timelines for implementation.
Bonta and Jennings are also scrutinizing OpenAI’s proposed recapitalization from a nonprofit into a structure with for-profit elements. Their investigation aims to ensure that the organization’s stated mission of building safe AGI that benefits all of humanity, including children, remains binding and enforceable.
The AGs frame public safety as core to their duty. They warned that the industry at large, not just OpenAI, “is not where they need to be” on safety in the development and deployment of AI systems. The letter makes clear they expect transparency and immediate corrective steps wherever harms are plausible or demonstrated.
Why this matters beyond OpenAI
The scrutiny signals a broader shift: regulators expect companies to treat safety as a functional requirement, not an afterthought. When AI systems interact with minors, the stakes are legal, ethical, and existential for public trust. Companies that can’t demonstrate robust guardrails face investigations, enforcement, and reputational damage.
What should organizations do now? Practical steps include independent safety audits, transparent reporting on incidents, stronger age verification and content controls, and governance mechanisms that bind mission commitments to corporate decisions. Independent red-team testing and measurable remediation timelines can help demonstrate progress to regulators and the public.
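To make the “age verification and content controls” point concrete, here is a minimal sketch of a layered session guard. Everything in it is illustrative: the `Session` type, the keyword lists, and the `guard_message` function are hypothetical stand-ins for real age-assurance services and trained safety classifiers, not OpenAI’s or any vendor’s actual implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical keyword lists for illustration only; a production system
# would use trained classifiers and human review, not string matching.
SELF_HARM_TERMS = {"suicide", "self-harm", "kill myself"}
ADULT_TERMS = {"explicit", "sexual"}

@dataclass
class Session:
    user_id: str
    age_verified: bool
    age: int | None = None
    incidents: list[dict] = field(default_factory=list)

def guard_message(session: Session, message: str) -> str:
    """Apply layered safety checks before the model sees a message."""
    # 1. Age gate: block unverified users from open-ended chat entirely.
    if not session.age_verified:
        return "blocked: age verification required"

    lowered = message.lower()

    # 2. Crisis escalation: route self-harm signals to a human/crisis
    #    flow and record an incident for the transparency report.
    if any(term in lowered for term in SELF_HARM_TERMS):
        session.incidents.append({
            "type": "self_harm_signal",
            "at": datetime.now(timezone.utc).isoformat(),
        })
        return "escalated: crisis resources and human review"

    # 3. Minor-specific content controls: stricter policy under 18.
    if session.age is not None and session.age < 18:
        if any(term in lowered for term in ADULT_TERMS):
            session.incidents.append({
                "type": "minor_content_block",
                "at": datetime.now(timezone.utc).isoformat(),
            })
            return "blocked: content not permitted for minors"

    return "allowed"

if __name__ == "__main__":
    teen = Session(user_id="u1", age_verified=True, age=15)
    print(guard_message(teen, "tell me something explicit"))  # blocked
    print(guard_message(teen, "help with my homework"))       # allowed
    print(teen.incidents)  # evidence trail for audits and regulators
```

The design point regulators care about is the evidence trail: every blocked or escalated interaction is recorded, so an organization can show what failed, when it failed, and how it responded.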
At QuarkyByte we analyze failure modes and map measurable mitigations—combining adversarial testing, policy alignment checks, and stakeholder reporting. For leaders in government, enterprise, and safety engineering, that means clear evidence you can present to oversight bodies: what failed, why it failed, and how quickly risk will be reduced.
The AGs’ letter to OpenAI raises hard questions about accountability and the pace of safety work as AI scales. Regulators are signaling they will use their authority to ensure that protecting children and the public is not an optional feature of AI deployment but a core governance requirement.