HMD Fuse phone uses on‑device AI to block nude images
HMD’s new Fuse phone embeds SafeToNet’s HarmBlock Plus AI in the camera to stop children filming, sending, or viewing sexual images — even in livestreams and behind VPNs. The on‑device system works offline, is claimed to be tamper‑proof, and pairs with parental controls. It launches on Vodafone UK amid new Online Safety Act rules.
Finnish phone maker HMD has unveiled the Fuse, a smartphone built around a single, urgent goal: prevent children from filming, sending, viewing, or saving sexual images. The device is aimed at parents worried about both accidental exposure and the possibility of kids creating explicit content themselves.
At the technical core is HarmBlock Plus, an AI system developed by SafeToNet and embedded directly into the phone, including the camera. HMD says the model runs offline, was trained on 22 million harmful images, and cannot be removed or bypassed, even during livestreams or when traffic is routed through a VPN.
The Fuse also ships with parental controls that let guardians supervise usage and gradually relax restrictions as children mature. HMD positions the device as a safety net and a product of its Better Phones Project, an effort to make phone ownership safer for younger users.
The phone will be sold exclusively through Vodafone in the UK, priced at £33 per month with a £30 upfront fee. The timing aligns with the UK’s Online Safety Act, which strengthens age verification and responsibilities for preventing minors’ access to harmful content. HMD plans a wider rollout starting with Australia.
HMD and SafeToNet emphasize privacy: they say HarmBlock Plus collects no personal data and is designed to operate with no external dependencies. SafeToNet's founder calls the system tamper-proof and effective across every app.
That sell‑sheet raises real technical and ethical questions. On‑device moderation reduces exposure and protects privacy compared with cloud scanning, but claims of being impossible to bypass merit independent verification. Transparency about training data, testing methodology, and failure modes will matter to researchers, regulators, and parents alike.
There are practical tradeoffs: aggressive blocking can generate false positives that interrupt legitimate or educational use, while overly cautious thresholds leave gaps. Equally important are appeal paths and ways to scale controls as children gain autonomy without compromising safety.
- Audit detection accuracy and test against diverse real‑world images
- Design UX that explains blocks and offers escalation or educational guidance
- Map moderation policies to local laws like the Online Safety Act and document compliance
- Monitor false positives, bypass attempts, latency, and user outcomes post‑deployment
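The accuracy and monitoring steps above boil down to tracking a few standard detection metrics during a pilot. A minimal sketch of that computation is below; the function name and data are illustrative, not SafeToNet's or HMD's actual tooling.

```python
# Hypothetical pilot-evaluation helper: given ground-truth labels and the
# on-device model's block decisions, compute the headline detection metrics.
# All names here are illustrative assumptions, not a real HarmBlock Plus API.

def pilot_metrics(ground_truth, blocked):
    """ground_truth[i] is True if image i is actually harmful;
    blocked[i] is True if the model blocked it."""
    pairs = list(zip(ground_truth, blocked))
    tp = sum(g and b for g, b in pairs)            # harmful images correctly blocked
    fp = sum((not g) and b for g, b in pairs)      # legitimate images wrongly blocked
    fn = sum(g and (not b) for g, b in pairs)      # harmful images missed
    tn = sum((not g) and (not b) for g, b in pairs)
    return {
        "true_positive_rate": tp / (tp + fn) if tp + fn else 0.0,
        "false_positive_rate": fp / (fp + tn) if fp + tn else 0.0,
        "precision": tp / (tp + fp) if tp + fp else 0.0,
    }

# Toy run: 4 test images with one miss and one false positive
print(pilot_metrics([True, True, False, False], [True, False, True, False]))
```

Regulators and operators typically care about both rates together: a low false-positive rate with a low true-positive rate means the filter is quiet but porous, while the reverse means it is safe but disruptive.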
For carriers, device makers, and regulators, the Fuse is a live example of how hardware and on‑device AI can be used to meet child protection goals. But implementing this safely across millions of users requires independent testing, clear lifecycle governance for the model, and policies to handle edge cases.
QuarkyByte’s approach is to combine empirical testing with policy mapping: run controlled pilots, measure true and false positive rates, review training provenance, and design user flows that prioritize child safety while preserving legitimate uses. That evidence base helps operators make defensible decisions and report to regulators.
HMD’s Fuse points to a future where devices take a proactive role in protecting minors. The concept is promising, but real‑world effectiveness will hinge on transparency, continuous evaluation, and careful UX and policy design that balance protection with rights and practical use.
QuarkyByte can help telcos and device makers validate on‑device moderation: run model audits, design transparency and escalation workflows, and map controls to regulations like the Online Safety Act. Ask us to create pilots that measure detection accuracy, false positives, and real‑world user impact.