California Moves First to Regulate AI Companion Chatbots
California’s Assembly passed SB 243 to regulate AI companion chatbots, aiming to protect minors and vulnerable users. The bill bans chatbots from engaging in self-harm or explicit discussions, mandates recurring alerts, and institutes transparency and reporting rules. It creates legal remedies for violations and targets major players like OpenAI, Character.AI, and Replika as the state moves to set a national precedent.
California took a major regulatory step Wednesday when the State Assembly passed SB 243, a bill aimed squarely at AI companion chatbots. The measure, approved with bipartisan support, now heads to the state Senate for a final vote and — if signed by Governor Gavin Newsom — would take effect on January 1, 2026.
What SB 243 requires
The law targets "companion" AI systems — models that adapt and deliver human-like responses to meet users' social needs. It bars these chatbots from engaging in conversations about suicidal ideation, self-harm, or sexually explicit content with users, and it requires recurring alerts that remind people they are interacting with an AI. For minors the bill demands reminders every three hours.
SB 243 also sets annual transparency and reporting duties for operators and creates a private right of action for injured individuals. Plaintiffs could seek injunctive relief, attorney’s fees, and damages up to $1,000 per violation. Reporting requirements would begin July 1, 2027.
Why lawmakers moved quickly
The bill gained momentum after the tragic death of teenager Adam Raine, who reportedly discussed and planned his suicide in prolonged chats with ChatGPT. Lawmakers also cited leaked documents alleging that Meta’s chatbots could engage in romantic or sensual exchanges with minors. Those cases pushed legislators to act amid growing public concern.
What changed during drafting
SB 243 originally included stricter controls — such as banning variable-reward tactics and requiring operators to log how often chatbots initiate conversations about self-harm — but many provisions were scaled back in amendments. Sponsors say the current version balances enforceable protections with technical feasibility.
Industry and political context
SB 243 lands as federal and state scrutiny of AI intensifies. The FTC is preparing investigations into chatbots’ impact on children, and state attorneys general have opened probes into companies like Meta and Character.AI. Meanwhile, big tech firms are lobbying for looser, federal-level frameworks while some startups and advocates push for stronger state rules.
What this means for developers and companies
If SB 243 becomes law, companies offering companion chatbots will need to build clearer guardrails: content filters tuned to detect self-harm and sexual content, timely crisis referrals, age-aware notification systems, and documentation for transparency reports. Firms will also face legal and reputational risks if they fail to meet the law’s standards.
For policymakers, SB 243 is a test case: can states craft focused rules that reduce harm without stifling innovation? Sponsors argue yes, saying safeguards and innovation can coexist. Critics warn of added complexity and compliance burdens, especially as multiple states and federal agencies weigh differing approaches.
How organizations should respond
Companies should map user journeys, simulate harmful prompts, and quantify how often systems refer users to crisis services. They should also prepare transparent reporting pipelines and legal playbooks for potential litigation. These steps help reduce risk and build trust with regulators and users alike.
At QuarkyByte we translate such regulatory signals into actionable roadmaps: we analyze model behavior under realistic conversational flows, design traceable referral and alert metrics, and help teams prioritize mitigations that reduce measurable harms while keeping products useful. Companies that treat safety as an engineering and measurement problem will be best positioned to comply and compete.
SB 243 could set a precedent for other states and influence federal policy. As lawmakers, regulators, and companies continue to debate the boundaries of AI safety, one clear takeaway is that developers must bake protection, transparency, and accountability into companion systems — not as afterthoughts, but as design principles.
Keep Reading
View AllThinking Machines Tackles LLM Nondeterminism
Mira Murati’s Thinking Machines Lab reveals research to make LLM outputs reproducible by controlling GPU kernel orchestration.
AI Hardware and Robotics Take Center Stage at TechCrunch Disrupt 2025
At TechCrunch Disrupt 2025, Waabi and Apptronik reveal how AI meets real-world robotics, covering simulation, sensors, safety, and scaling.
Anthropic Restores Claude After Short API Outage
Anthropic briefly took Claude, its APIs, and Console offline; service was quickly restored and fixes are being monitored.
AI Tools Built for Agencies That Move Fast.
QuarkyByte helps organizations translate SB 243-style rules into practical compliance and safety roadmaps. We model risk, test chatbot behavior under real-world prompts, and design transparency metrics that regulators will demand. Engage with us to turn regulatory pressure into measurable safety and user trust.