California Moves to Regulate AI Companion Chatbots

California’s SB 243 would make it the first state to mandate safety controls for AI companion chatbots, requiring recurring user alerts, annual transparency reports, and liability for violations. The bill responds to a teen’s death and to leaked internal documents, and it tightens scrutiny of OpenAI, Meta, Replika, and other providers, though some of its stronger early provisions were trimmed before passage.

Published September 11, 2025 at 07:09 PM EDT in Artificial Intelligence (AI)

California Advances First AI Companion Safety Law

California has moved a major step closer to regulating AI companion chatbots. SB 243 passed both legislative chambers with bipartisan support and now goes to Governor Gavin Newsom, who has until October 12 to sign or veto the bill.

If signed, SB 243 would take effect on January 1, 2026, and require operators of “companion” systems, meaning AI that provides adaptive, human-like responses geared toward social connection, to implement safety protocols and face legal accountability when those systems fail vulnerable users.

Key requirements in the bill

  • Prohibits companion chatbots from engaging minors and other vulnerable users in conversations about suicide, self-harm, or sexually explicit content.
  • Requires recurring alerts (every three hours for minors) reminding users that they are speaking with an AI and suggesting they take a break; a minimal timing sketch follows this list.
  • Establishes annual transparency and reporting requirements for companies offering companion chatbots, effective July 1, 2027.
  • Creates a private right of action allowing individuals to sue for injunctive relief, damages (up to $1,000 per violation), and attorney’s fees.
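
To make the alert cadence concrete, here is a minimal Python sketch of the recurring-disclosure timer. Only the three-hour interval for minors comes from the bill; the Session type, field names, and message wording are illustrative assumptions, and the cadence for adult users is left open.

```python
from dataclasses import dataclass, field
import time

# SB 243's stated cadence: a recurring AI disclosure every three hours for minors.
ALERT_INTERVAL_SECONDS = 3 * 60 * 60

@dataclass
class Session:  # hypothetical session state, not defined by the bill
    user_is_minor: bool
    last_alert_at: float = field(default_factory=time.monotonic)

def maybe_send_ai_disclosure(session: Session) -> str | None:
    """Return a disclosure message if one is due, else None."""
    if not session.user_is_minor:
        return None  # assumption: the adult cadence is governed elsewhere
    if time.monotonic() - session.last_alert_at >= ALERT_INTERVAL_SECONDS:
        session.last_alert_at = time.monotonic()
        return ("Reminder: you are chatting with an AI, not a person. "
                "Consider taking a break.")
    return None
```

A real deployment would persist the timestamp server-side so the clock survives reconnects, but the check itself stays this simple.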

The bill targets major providers that offer companion-like experiences, including OpenAI, Character.AI, and Replika, and it follows high-profile incidents and leaks that raised concerns about how chatbots interact with children and vulnerable people.

SB 243 gained momentum after the tragic death of teenager Adam Raine, who reportedly discussed and planned self-harm in prolonged chats with ChatGPT. Lawmakers also cited leaked documents suggesting platforms allowed romantic or sensual exchanges with minors.

The bill was pared back during negotiations. Earlier drafts would have banned variable-reward mechanics that can create addictive loops and required more granular reporting on when chatbots initiated discussions of suicidal ideation. Supporters say the final text balances enforceable protections with technical feasibility.

SB 243 arrives amid broader U.S. scrutiny: the FTC is preparing inquiries into how chatbots affect children, state attorneys general are investigating companies, and senators have opened probes into Meta and others. At the same time, tech firms are lobbying for lighter-touch federal and international frameworks.

What companies should prepare now

  • Audit conversation flows for language that could be interpreted as facilitating self-harm or sexual content and tune detectors to reduce false negatives.
  • Design persistent, age-aware disclaimers and timed break prompts, and log exposures so reporting obligations can be met.
  • Implement crisis referral pipelines that link at-risk users to resources and capture referral counts for transparency reports.
  • Build audit trails and metrics for regulators and potential litigation, including incident logs, model versions, and moderation actions; a minimal logging sketch follows this list.
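
To illustrate the logging and audit-trail items above, here is a minimal Python sketch of an append-only event log. The RiskEvent schema, field names, and JSONL format are assumptions; SB 243 prescribes reporting outcomes, not a storage design.

```python
import json
import time
import uuid
from dataclasses import dataclass, asdict

@dataclass
class RiskEvent:
    """One auditable moderation event; the schema is illustrative, not mandated."""
    event_id: str
    timestamp: float
    session_id: str
    model_version: str
    event_type: str    # e.g. "self_harm_detected", "break_prompt", "crisis_referral"
    action_taken: str  # e.g. "conversation_redirected", "hotline_shown"

def log_event(path: str, event: RiskEvent) -> None:
    # Append-only JSONL preserves an immutable trail for regulators and counsel.
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(event)) + "\n")

# Usage: record that a crisis referral was surfaced during a session.
log_event("risk_events.jsonl", RiskEvent(
    event_id=str(uuid.uuid4()),
    timestamp=time.time(),
    session_id="sess-123",            # hypothetical identifiers
    model_version="companion-v2.4",
    event_type="crisis_referral",
    action_taken="hotline_shown",
))
```

Transparency-report figures such as referral counts can then be aggregated from the log rather than reconstructed after the fact.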

Regulatory action in California will shape how companies design companion experiences nationwide. Whether SB 243 becomes law or simply inspires federal standards, product, legal, and safety teams should assume higher transparency expectations and potential liability for harms.

QuarkyByte helps public- and private-sector teams convert these legal requirements into operational controls: detecting at-risk dialogue, deploying age-sensitive UX, and producing the measurable reporting regulators will seek. Acting now reduces legal exposure and improves care for vulnerable users.
