Consumer Groups Demand FTC Probe of Grok Spicy Mode
A coalition led by the Consumer Federation of America has asked the FTC and state attorneys general to investigate xAI’s Grok Imagine “Spicy” mode after tests produced topless deepfakes of Taylor Swift. The letter warns of non-consensual imagery, weak age checks, and the risk that removing safeguards would unleash widespread harmful deepfakes.
A coalition of 15 consumer protection groups led by the Consumer Federation of America has asked the Federal Trade Commission and every state attorney general to open an urgent probe into xAI’s Grok Imagine tool, specifically its new “Spicy” mode. The demand follows reporting that the tool generated topless deepfake videos of Taylor Swift even without an explicit prompt.
The letter says Spicy explicitly encourages NSFW outputs and that Grok Imagine can create nude videos from images it generates, content that can be made to resemble real, identifiable people. Although xAI currently blocks Spicy mode on user-uploaded photos, the groups warn that removing that guardrail or loosening moderation would “unleash a torrent of obviously nonconsensual deepfakes.”
Signatories pointed to immediate harms: reputational damage, harassment, threats to minors, and a chilling effect on free expression. They also flagged technical and legal gaps: the federal Take It Down Act criminalizes knowingly distributing non-consensual AI-generated intimate imagery, for example, but its reach may not extend to Grok’s outputs, so the groups asked regulators to examine violations of non-consensual intimate imagery laws and related state statutes.
Age verification is another flashpoint. Grok’s only protection is a single popup asking users to enter their birth year, and the field defaults to “2000,” a year that already passes the check. Advocates warn that this is little barrier to minors and may run afoul of the Children’s Online Privacy Protection Act and state rules governing access to adult content.
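To make the gap concrete, here is a minimal sketch of an age gate that fails closed instead of pre-filling a passing year. The function name and structure are hypothetical, not xAI’s code; self-attestation remains weak either way, which is the advocates’ point, but defaults matter:

```python
from datetime import date

MINIMUM_AGE = 18

def check_age_gate(birth_year: str | None) -> bool:
    """Server-side check of a self-reported birth year.

    Unlike a popup pre-filled with a passing value, a missing or
    implausible answer fails closed rather than passing through.
    """
    if birth_year is None:
        return False  # no default: a missing answer is a denial
    try:
        year = int(birth_year)
    except ValueError:
        return False  # non-numeric input fails closed
    current_year = date.today().year
    if year < 1900 or year > current_year:
        return False  # reject implausible years
    return current_year - year >= MINIMUM_AGE
```

Even this stricter version only raises the bar slightly; a minor can still type a false year, which is why the groups argue self-attestation alone cannot satisfy child-safety obligations for adult content.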
Why this matters now
Deepfakes move fast. What begins as a celebrity-targeted image can rapidly scale to ordinary people through resharing and remixing. Moderation rollbacks under the banner of “free speech” have previously removed effective safeguards, and consumer groups say regulators must act before bad actors exploit generative systems at scale.
Practical steps regulators and platforms should demand
- Suspend or disable any NSFW mode that yields realistic images of identifiable people until safety audits are complete.
- Require meaningful age verification and remove defaults that make bypassing trivial.
- Mandate provenance, visible watermarks, and technical measures that help third-party detection and takedown (a sketch of the provenance idea follows this list).
- Conduct adversarial red-team audits and publish transparency reports showing misuse rates and corrective actions.
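As a hedged illustration of the provenance step, the sketch below attaches a verifiable record to a generated image. The field names and model identifier are assumptions for illustration; a production system would use a signed standard such as C2PA Content Credentials rather than a bare JSON record:

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_manifest(image_bytes: bytes, model_id: str, mode: str) -> dict:
    """Build a minimal provenance record for a generated image.

    The content hash lets third parties verify that the manifest
    matches the file, which supports detection and takedown tooling.
    """
    return {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": model_id,    # hypothetical model identifier
        "generation_mode": mode,  # e.g. "default" vs "spicy"
        "created_utc": datetime.now(timezone.utc).isoformat(),
        "synthetic": True,        # unambiguous AI-origin flag
    }

# Example: tag an output before it is distributed
manifest = provenance_manifest(b"raw image bytes here", "image-model-v1", "spicy")
print(json.dumps(manifest, indent=2))
```

The design point is verifiability: because the record embeds a hash of the exact bytes, downstream platforms and detection services can confirm that a circulating file is the tagged synthetic original, even after it leaves the generator’s platform.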
These measures are not just technical choices; they are policy levers. Regulators can demand empirical evidence that a company’s mitigations actually reduce harm rather than serve as box-ticking exercises.
For organizations building or governing generative models, the takeaway is clear: rapid deployment without accompanying safety evidence leaves victims exposed and invites enforcement. This episode is a reminder that the novelty of AI does not remove responsibility; it amplifies it.
QuarkyByte's analytical approach is to map misuse pathways, simulate attacker behaviors, and quantify exposure across distribution channels. That makes it possible to recommend targeted guardrails — for example, measurable throttles on synthesis quality for identifiable faces, mandatory provenance tags, and clear escalation paths for rapid takedown and victim support.
The letter to the FTC and state attorneys general raises the stakes for xAI and other model builders. Regulators will now have to decide whether current safeguards are adequate or whether enforcement, new rulemaking, or industry-wide standards are needed to curb non-consensual deepfakes before harms compound.
QuarkyByte can map how generative tools like Grok enable misuse, quantify downstream risk, and model targeted interventions that lower harm while preserving legitimate uses. Reach out to see how we help platforms and regulators stress-test moderation, design measurable guardrails, and prepare defensible compliance reports.