Experts Say Google's Gemini Water Claim Is Misleading

Google published a study claiming a median Gemini text prompt uses just 0.26 ml of water and 0.24 Wh of electricity. Researchers say the numbers downplay real impact by omitting indirect water tied to electricity generation, excluding location-based emissions, and failing to share token-level data or peer review. Experts warn this masks broader trends like rising absolute emissions.

Published August 21, 2025 at 09:13 AM EDT in Artificial Intelligence (AI)

Google’s Gemini paper highlights tiny per-prompt impacts — experts say key data is missing

Google released a study that estimates a median Gemini text prompt consumes about five drops of water (0.26 ml), 0.24 watt-hours of electricity, and roughly 0.03 grams of CO2. The tech giant credits recent efficiency gains for the sharp reductions compared with older estimates.

But academics and independent analysts told The Verge the paper leaves out crucial context that makes those numbers feel like the "tip of the iceberg." The main concerns: omission of indirect water tied to electricity production, reliance on a market-based emissions metric without location-based figures, and limited methodological transparency.

Why indirect water matters: much of a data center’s water use is embedded in the electricity that powers it — cooling towers at power plants, hydroelectric reservoir losses, and thermal plant steam cycles. Google’s published water estimate covers direct cooling in its centers but not the water footprint of the grid that supplies them.
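The gap can be made concrete with a little arithmetic. In the sketch below, the two per-prompt figures (0.26 ml direct water, 0.24 Wh electricity) come from Google's published study; the grid water-intensity value is a hypothetical assumption for illustration, since real intensities vary widely by region and generation mix.

```python
# Illustrative sketch: adding indirect (grid) water to Google's published
# per-prompt figures. Only the first two constants are from the study;
# the grid intensity is an assumed placeholder, not a measured value.

DIRECT_WATER_ML = 0.26      # Google's published direct cooling water per prompt
ENERGY_WH = 0.24            # Google's published electricity per prompt

# Hypothetical water intensity of the supplying grid, in ml per Wh.
# Thermal and hydro-heavy grids consume far more water than wind or solar.
GRID_WATER_ML_PER_WH = 7.6  # assumption for illustration only

indirect_water_ml = ENERGY_WH * GRID_WATER_ML_PER_WH
total_water_ml = DIRECT_WATER_ML + indirect_water_ml

print(f"direct:   {DIRECT_WATER_ML:.2f} ml")
print(f"indirect: {indirect_water_ml:.2f} ml")
print(f"total:    {total_water_ml:.2f} ml")
```

Under this assumed grid intensity, indirect water dwarfs the direct cooling figure, which is exactly why critics call the published number the tip of the iceberg.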

Emissions accounting can also mislead when it uses only market-based metrics. Those reflect corporate renewable purchases rather than the actual mix of power on the local grid. Experts argue Google should publish location-based emissions too — the "ground truth" that shows local environmental impact.
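The difference between the two accounting methods is easiest to see side by side. In this sketch, the per-prompt energy figure is Google's published value; both emission factors are hypothetical, chosen only to illustrate how contractual renewable purchases shrink the market-based number relative to the local grid's actual mix.

```python
# Illustrative sketch of GHG Protocol Scope 2 accounting methods.
# Both emission factors below are assumed values, not real grid data.

energy_wh = 0.24  # Google's published per-prompt electricity

# Location-based: average emission factor of the local grid (gCO2 per Wh).
local_grid_factor = 0.45   # hypothetical regional grid mix

# Market-based: factor after contractual instruments (PPAs, RECs).
contractual_factor = 0.12  # hypothetical post-purchase factor

location_based_g = energy_wh * local_grid_factor  # 0.108 gCO2
market_based_g = energy_wh * contractual_factor   # ~0.029 gCO2
```

Same prompt, same physical grid, yet the market-based figure is roughly a quarter of the location-based one here. That is why experts want both numbers disclosed.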

Other methodological gaps include reporting only a median prompt without disclosing token counts, the distribution of prompt lengths, or how outliers are handled. That prevents independent replication and apples-to-apples comparisons with earlier studies that estimated much larger per-prompt water use.
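A toy distribution shows why a lone median can mislead. The per-prompt energy values below are made up; the point is only that in a heavy-tailed workload, the median says little about the mean, and it is the mean that drives total load.

```python
# Illustrative sketch: median vs. mean in a heavy-tailed workload.
# The per-prompt energy values are invented for illustration.
import statistics

# 90% short prompts, 10% long outlier prompts (Wh per prompt, assumed).
prompt_energy_wh = [0.1] * 90 + [5.0] * 10

median_wh = statistics.median(prompt_energy_wh)  # 0.1
mean_wh = statistics.mean(prompt_energy_wh)      # 0.59

# The median is 0.1 Wh, but the mean -- which determines aggregate
# consumption -- is nearly 6x higher. Without the distribution and
# outlier handling, a median alone cannot be replicated or compared.
```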

Experts also warn of Jevons paradox: higher efficiency can lower cost-per-use and drive more overall consumption. Google’s own sustainability report shows absolute emissions rose as AI workloads grew, even while per-prompt efficiency improved.
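The paradox is simple arithmetic. All numbers in this sketch are hypothetical; it shows only how a large per-prompt efficiency gain can coexist with rising absolute consumption when usage grows faster.

```python
# Illustrative sketch of Jevons paradox. All figures are hypothetical.

old_wh_per_prompt = 2.4
new_wh_per_prompt = 0.24            # assumed 10x efficiency gain
old_prompts_per_day = 1_000_000
new_prompts_per_day = 50_000_000    # assumed 50x growth in usage

old_total_wh = old_wh_per_prompt * old_prompts_per_day  # ~2.4 MWh/day
new_total_wh = new_wh_per_prompt * new_prompts_per_day  # ~12 MWh/day

# Per-prompt efficiency improved 10x, yet total consumption rose 5x --
# the pattern Google's own sustainability report shows at company scale.
```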

Researchers recommend several fixes to make AI environmental claims trustworthy and actionable:

  • Publish both market-based and location-based emissions using Greenhouse Gas Protocol standards.
  • Report token- or word-level distributions, median and mean values, and how outliers are handled so others can replicate results.
  • Include indirect water linked to electricity generation and disclose region-specific grid mixes.

Google says it’s open to peer review but has not submitted the paper yet. Without independent validation and fuller disclosure, these headline figures risk becoming PR talking points rather than robust benchmarks for policy or procurement.

What this means for developers, IT leaders, and regulators: demand transparent, region-aware metrics when evaluating models and vendors. Efficiency wins are meaningful, but organizations need full-lifecycle assessments to set credible sustainability targets and avoid unintended increases in total impact.

In short, Google’s numbers may be directionally useful, but they don’t yet tell the whole story. Independent peer review, token-level transparency, and inclusion of indirect grid impacts are essential next steps if industry claims are to guide real-world policy and infrastructure choices.
