OpenAI's AGI Crusade Reshapes the AI Industry
Karen Hao frames OpenAI as the evangelist of a new AI empire built around the promise of AGI. This faith in speed and scale has reshaped research, centralized talent and capital, and produced massive environmental and social costs—often while alternative, lower-impact AI advances go underfunded.
Karen Hao’s portrait of the AI industry in Empire of AI draws a clear, unsettling parallel: empires are driven by an ideology that justifies scale and expansion, even when the costs of that expansion contradict the stated mission. For today’s AI world, that ideology is the promise of artificial general intelligence (AGI), and OpenAI serves as its chief evangelist.
Hao describes how belief in AGI has reshaped priorities: speed over efficiency, scale over safety, and rapid productization over exploratory science. Researchers, capital, and infrastructure have been pulled into a single gravitational center — one that rewards pumping more data and compute into existing techniques rather than inventing cheaper, safer alternatives.
The financial stakes are enormous and growing. OpenAI has reportedly projected burning through $115 billion by 2029; Meta and Google are committing tens of billions of dollars to AI infrastructure. Spending at that scale concentrates power and shapes the discipline itself: top researchers move to industry, academic inquiry narrows, and corporate agendas steer what counts as progress.
But this rush has clear and present harms. Hao documents worker abuse in data labeling and moderation, environmental strain from massive compute, and societal harms from models that fuel delusions or displace jobs. Meanwhile, the promised utopia of AGI that "benefits all humanity" remains aspirational and poorly defined.
Hao contrasts the AGI religion with examples of targeted, high-impact AI like DeepMind’s AlphaFold — systems that solve concrete scientific problems with far less data, fewer social harms, and measurable benefits in medicine and biology. Her point: progress doesn’t require a winner-take-all, compute-first strategy.
Policy and market choices have amplified the AGI narrative. Ties between Silicon Valley and geopolitics, and headlines about "racing" China, have been used to justify speed. Yet Hao argues this has often led to illiberal outcomes and global concentration of influence rather than the liberalizing effect some promised.
So what should leaders and builders do differently? The debate is not between stopping progress and blind pursuit. It’s about choosing which progress to fund and how to measure success. That means embracing alternatives that prioritize utility, safety, and efficiency.
- Prioritize algorithmic efficiency: invest in methods that reduce data and compute requirements.
- Measure true impact: evaluate environmental costs, worker safety, and downstream social harms alongside productivity gains (see the sketch after this list).
- Support diverse research ecosystems: fund academia and non-profit labs so scientific curiosity isn’t subordinated to corporate roadmaps.
- Enforce governance and transparency: clarify how blended nonprofit/for-profit models measure public benefit and handle conflicts of interest.
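Putting numbers on the second recommendation need not be complicated. Below is a minimal back-of-envelope sketch of the kind of impact accounting it implies, in Python; every figure (GPU count, per-GPU power draw, datacenter PUE, grid carbon intensity) is an illustrative assumption, not a reported statistic.

```python
# Back-of-envelope estimate of a training run's energy use and emissions.
# All parameter values below are illustrative assumptions, not reported figures.

def training_footprint(gpus: int, watts_per_gpu: float, hours: float,
                       pue: float, kg_co2_per_kwh: float) -> tuple[float, float]:
    """Return (energy_kwh, emissions_kg_co2) for a hypothetical run."""
    # PUE scales the IT load up to total facility load (cooling, power delivery).
    energy_kwh = gpus * watts_per_gpu * hours / 1000 * pue
    # Grid carbon intensity converts energy into emissions.
    emissions_kg = energy_kwh * kg_co2_per_kwh
    return energy_kwh, emissions_kg

if __name__ == "__main__":
    # Hypothetical cluster: 10,000 GPUs at 700 W each, running for 30 days,
    # in a datacenter with PUE 1.2 on a grid emitting 0.4 kg CO2 per kWh.
    kwh, co2 = training_footprint(gpus=10_000, watts_per_gpu=700.0,
                                  hours=30 * 24, pue=1.2, kg_co2_per_kwh=0.4)
    print(f"Energy: {kwh:,.0f} kWh   Emissions: {co2 / 1000:,.0f} t CO2")
```

Swapping in measured values for a specific deployment turns the same arithmetic into a comparison tool: an efficiency-first path and a scale-first path can be priced in kilowatt-hours and tonnes of CO2, not just in headlines.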
The takeaway is pragmatic: AGI as a mission has reorganized incentives and resources. That shift can accelerate breakthroughs, but it also produces concentrated power and collateral damage. Recognizing the pattern is the first step toward course correction.
For policymakers, enterprises, and research leaders, the practical question is familiar: how do you get useful, high-impact AI without reproducing the harms of an empire built on speed and scale? The answers lie in measured investments, accountable governance, and a renewed emphasis on efficiency and domain-specific systems.
Karen Hao’s critique doesn’t call for halting AI; it calls for recalibrating our faith. We can race toward grand promises or we can choose targeted, safer, and more equitable AI that delivers real-world value today. Which future will we fund?
QuarkyByte translates these debates into operational choices: aligning R&D portfolios with impact metrics, modeling compute and environmental tradeoffs, and designing governance that ties corporate incentives to public benefit. The next phase of AI should be measured, accountable, and useful — not just faster.