Sam Altman: Bots Are Making Social Media Feel Fake
Sam Altman said reading Reddit convinced him that bots and human posts written in LLM style now make it hard to tell who’s real. He pointed to LLM quirks, platform incentives, astroturfing fears and engagement optimization as drivers. The shift complicates trust, moderation and any plan to build a ‘bot-free’ social network.
Sam Altman had a striking realization this week: social media now feels indistinguishable from an army of bots. While browsing r/Claudecode and seeing wave after wave of pro‑Codex posts, he found himself assuming many were fake — even though product adoption for Codex is genuinely strong.
Altman then walked through why the line between human and model-written posts is blurring. His thread pointed to several forces colliding to create an uncanny new kind of social conversation.
What’s driving the fake feeling
- Real users picking up LLM phrasing and quirks
- Highly correlated behavior among extremely online communities
- Hype cycles and extremes in sentiment that look scripted
- Platform optimization for engagement plus creator monetization incentives
- Concerns about astroturfing and premeditated campaigns
Altman’s point is both simple and unnerving: large language models were built to mimic human writing, and now humans are adopting that style, so the signal is noisy in both directions. He also has skin in the game: OpenAI’s models were trained heavily on Reddit data, and Altman once sat on Reddit’s board.
The stakes go beyond meme wars. Imperva reported over half of internet traffic in 2024 was non‑human, and platform owners estimate hundreds of millions of bots on major services. The result: harder moderation, brittle community trust, and lower signal for product decisions.
Real-world consequences
If platforms can’t distinguish authentic voices from bot-driven or LLM‑assisted posts, everything from product launches to public policy discussion is affected. Even a platform built to be ‘bot-free’ would not escape the problem: experiments with bot-only social networks show they form cliques and echo chambers of their own.
So what should leaders do? First, stop treating authenticity as a binary bot-or-human call. Detection matters, but so do incentives, provenance, and behavior signals. Combine signal engineering with policy and community design, and test how moderation changes real engagement metrics, as the sketch below illustrates.
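To make "more than binary" concrete, here is a minimal, hypothetical Python sketch of folding several weak behavior signals into a continuous authenticity score instead of a bot/human verdict. The signal names, thresholds, and weights are illustrative assumptions for this article, not any platform's production detector, and would need tuning against labeled or experimental data.

```python
from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Illustrative behavior signals; a real platform would derive these
    # from its own telemetry and provenance data.
    account_age_days: int
    posts_per_day: float
    burstiness: float        # 0..1, how tightly clustered posting times are
    text_similarity: float   # 0..1, similarity to known LLM phrasing patterns
    network_overlap: float   # 0..1, share of interactions inside one clique

def authenticity_score(s: AccountSignals) -> float:
    """Blend weak signals into a 0..1 score. Weights are placeholders
    meant to be calibrated, not ground truth."""
    score = 1.0
    score -= 0.25 * s.text_similarity
    score -= 0.25 * s.burstiness
    score -= 0.20 * s.network_overlap
    score -= 0.15 * min(s.posts_per_day / 50.0, 1.0)       # heavy posting volume
    score += 0.10 * min(s.account_age_days / 365.0, 1.0)   # older accounts earn some trust
    return max(0.0, min(1.0, score))

if __name__ == "__main__":
    suspect = AccountSignals(account_age_days=12, posts_per_day=40,
                             burstiness=0.9, text_similarity=0.8,
                             network_overlap=0.7)
    # Prints a low score (~0.32): route to review or rate-limit rather than ban outright.
    print(f"authenticity: {authenticity_score(suspect):.2f}")
```

A continuous score like this lets teams A/B test interventions (rate limits, friction, labeling) at different thresholds and measure the effect on engagement, rather than betting everything on a single bot classifier.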
QuarkyByte’s approach is to simulate social dynamics, run controlled experiments, and build authenticity scorecards that help product and policy teams measure harm and improvement. That lets platforms iterate toward healthier conversations without guessing.
Altman’s observation is a timely reminder: AI changed how we write, and social platforms must change how they judge what’s real. For engineers, moderators, and regulators, the task now is to blend detection with design so authenticity becomes a measurable, improvable part of platform health.