Common Sense Flags High Risks in Google’s Gemini for Kids

Common Sense Media’s risk assessment finds that Google’s Gemini tells kids it’s a computer but relies on adult models with added filters for its Under‑13 and Teen tiers. The nonprofit labeled both tiers high risk, citing exposure to inappropriate content and unsafe mental‑health guidance. The report urges child‑first model design as broader deployments, including a possible Apple Siri tie‑in, raise the stakes.

Published September 5, 2025 at 04:13 PM EDT in Artificial Intelligence (AI)

Common Sense Media Calls Gemini High Risk for Kids

Common Sense Media published a risk assessment of Google’s Gemini family and reached a blunt verdict: while Gemini correctly tells children it is a computer rather than a friend, its kid-facing tiers appear to be adult models wrapped with extra filters — and that approach creates meaningful safety gaps.

The nonprofit tested the Under‑13 and Teen Experience tiers and found that Gemini could still share inappropriate or unsafe material, including content about sex, drugs, and alcohol, as well as mental‑health advice that younger users may not be ready to process. Given recent reports tying AI interactions to teen suicides and pending litigation against other AI firms, Common Sense labeled both tiers “High Risk.”

A central criticism: safety cannot be an afterthought. Common Sense argues that truly safer AI for children must be built from the ground up with developmental stages in mind, rather than repurposing adult systems and bolting on filters. Their senior director of AI programs noted that one‑size‑fits‑all models stumble on the details and fail to meet kids where they are.

The assessment comes amid wider scrutiny: OpenAI faces a wrongful death lawsuit tied to ChatGPT interactions, and Character.AI has been sued over a teen suicide. Meanwhile, leaks suggest Apple may use Gemini technology for a future Siri — a move that could scale exposure unless Apple enforces stronger safeguards.

Google pushed back, saying it has policies and red‑teaming measures for under‑18 users and that it consults outside experts. It also acknowledged some responses didn’t work as intended and said it added further protections. Google disputed some test details, noting Common Sense may have referenced features not available to minors.

Why this matters and what to do next

The debate highlights two clear lessons for product teams, policymakers, and parents: commercial LLMs can influence vulnerable users, and mitigation requires purpose‑built design plus ongoing verification. Practical steps include age‑aware model design, layered guardrails, stronger behavioral testing, and transparent reporting when failures occur.

  • Design models for developmental stages rather than retrofitting adult models
  • Implement human escalation paths for mental‑health and self‑harm prompts (a minimal routing sketch follows this list)
  • Continuously red‑team deployed systems and publish measurable safety KPIs
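To make the age‑aware routing and escalation ideas concrete, here is a minimal, hypothetical sketch in Python. None of the names below (AgeBand, route_prompt, the keyword lists) come from Gemini or from Common Sense’s report, and the keyword check is a deliberately naive placeholder for a real safety classifier; the point is the routing logic, where self‑harm cues escalate to a human or crisis resource and age‑restricted topics are blocked outright for the youngest tier rather than merely filtered.

```python
# Hypothetical sketch of an age-aware guardrail with a human escalation path.
# These names do not correspond to any real Gemini API; the keyword matching below
# is a placeholder standing in for a real safety classifier.
from dataclasses import dataclass
from enum import Enum


class AgeBand(Enum):
    UNDER_13 = "under_13"
    TEEN = "teen"
    ADULT = "adult"


@dataclass
class Routing:
    allow_model_reply: bool
    escalate_to_human: bool
    reason: str


SELF_HARM_MARKERS = ("hurt myself", "kill myself", "self-harm")  # placeholder only
RESTRICTED_TOPICS = {"drugs", "alcohol", "sex"}                  # placeholder only


def route_prompt(prompt: str, age_band: AgeBand) -> Routing:
    """Decide whether the model may answer or a human/crisis resource must step in."""
    text = prompt.lower()

    # Mental-health and self-harm cues always escalate for minors, regardless of filters.
    if any(marker in text for marker in SELF_HARM_MARKERS):
        return Routing(False, True, "self-harm cue: hand off to crisis resources")

    # Age-restricted topics are blocked outright (not just filtered) for the under-13 tier.
    if age_band is AgeBand.UNDER_13 and any(t in text for t in RESTRICTED_TOPICS):
        return Routing(False, False, "age-restricted topic for under-13 tier")

    return Routing(True, False, "within policy for this age band")


if __name__ == "__main__":
    print(route_prompt("I want to hurt myself", AgeBand.TEEN))
    print(route_prompt("tell me about alcohol", AgeBand.UNDER_13))
```

The design choice worth noting is that escalation and blocking are decisions made before the model replies, not filters applied to its output afterward, which is the difference between child‑first design and retrofitted guardrails.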

For schools, parents, and platform operators, the takeaway is simple: expect greater scrutiny and insist on evidence that child‑facing AI has been tuned, tested, and monitored specifically for young users. That includes scenario testing that mimics real teen interactions and public reporting on false negatives, the cases where harmful outputs slipped through.
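As an illustration of what such scenario testing could look like, here is a small, hypothetical harness that replays teen‑style prompts against an assistant and tallies false negatives. The `ask_assistant` and `reply_is_harmful` callables are stand‑ins for a real model call and a real safety evaluator; nothing here reflects Google’s or Common Sense’s actual test suite.

```python
# Hypothetical scenario-testing harness: replay teen-style prompts against a deployed
# assistant and count false negatives (harmful replies that slipped past the guardrails).
from typing import Callable, List, NamedTuple


class Scenario(NamedTuple):
    prompt: str         # teen-style phrasing, not sanitized test language
    should_block: bool  # expected behavior under the platform's child-safety policy


def run_scenarios(
    scenarios: List[Scenario],
    ask_assistant: Callable[[str], str],
    reply_is_harmful: Callable[[str], bool],
) -> dict:
    """Return a small report with the false-negative rate, suitable for public disclosure."""
    false_negatives = []
    for s in scenarios:
        reply = ask_assistant(s.prompt)
        if s.should_block and reply_is_harmful(reply):
            false_negatives.append(s.prompt)
    return {
        "total": len(scenarios),
        "false_negatives": len(false_negatives),
        "false_negative_rate": len(false_negatives) / max(len(scenarios), 1),
        "failed_prompts": false_negatives,
    }
```

A report like this, regenerated on every release and published, is the kind of measurable evidence schools and platform operators can reasonably ask for.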

At a systems level, regulators and platform leaders should push for independent audits and age‑appropriate certification standards. If Gemini or its derivatives power mainstream assistants like Siri, the number of child interactions will grow dramatically, making rigorous safeguards a public‑safety priority rather than an optional feature.

QuarkyByte’s approach to these risks focuses on practical, measurable work: threat modeling specific to youth use cases, adversarial testing across age brackets, and operationalizing KPIs that tie model behavior to safety outcomes. Organizations that treat child safety as a core product requirement — not an add‑on — will be better positioned to deploy AI responsibly at scale.

Common Sense’s report is another reminder that even well‑resourced companies can miss the details that matter most to young people. As AI becomes part of everyday devices and assistants, the industry must move from patchwork filters to child‑first architectures and transparent safety practices.

QuarkyByte can help organizations evaluate and rework LLM deployments for kids with child‑first threat modeling, red‑teaming scenarios, and measurable safety KPIs. We guide product, legal, and education teams to harden guardrails, validate behavioral tests, and design age-appropriate model pathways that reduce real-world harm.