Your ChatGPT Wrapped
Spotify may have started the yearly "Wrapped" trend with its summary of your music listening statistics, but many other services now do the same (even Lidl supermarket produces one for your annual shopping habits!). With usage of AI tools, and ChatGPT in particular, increasing massively, I thought it would be useful to have a "Wrapped"-style tool to help us see how our ChatGPT usage measures up. So here is "ChatGPT Wrapped":
How are these calculated?
Estimates in this tool are based on publicly available research into large language model inference, data centre electricity use, cooling infrastructure, and agricultural water footprints.
Independent investigations have shown that the environmental footprint of an individual AI query can vary significantly depending on the model used, data centre location, cooling method, and electricity grid. As a result, no single “per-query” value can be considered definitive.
To make these comparisons concrete, this tool uses the following per-query assumptions:
- Energy: ~0.005 kWh per ChatGPT query (5 watt-hours)
- Water: ~0.005 litres per ChatGPT query (5 millilitres)
- CO₂: ~0.002 kg per ChatGPT query (2 grams)
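Applied to a year of usage, these assumptions reduce to simple multiplication. The sketch below is an illustration of that arithmetic, not the tool's actual code, and the query count in the example is hypothetical:

```python
# Per-query assumptions used by this tool (mid-range estimates).
ENERGY_KWH_PER_QUERY = 0.005   # ~5 watt-hours
WATER_L_PER_QUERY = 0.005      # ~5 millilitres
CO2_KG_PER_QUERY = 0.002       # ~2 grams

def yearly_footprint(queries: int) -> dict:
    """Scale the per-query estimates by the total number of queries."""
    return {
        "energy_kwh": queries * ENERGY_KWH_PER_QUERY,
        "water_litres": queries * WATER_L_PER_QUERY,
        "co2_kg": queries * CO2_KG_PER_QUERY,
    }

# Example: 3,000 queries in a year (a hypothetical usage level).
print(yearly_footprint(3000))
# → {'energy_kwh': 15.0, 'water_litres': 15.0, 'co2_kg': 6.0}
```

As the rest of this page explains, these are mid-range figures, so the real answer for any individual is a range around numbers like these rather than an exact value.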
These figures draw on the following sources:

- MIT Technology Review – "We did the math on AI's energy footprint"
- WIRED – "You're Thinking About AI and Water All Wrong"
- International Energy Agency – Digitalisation and energy
- Water Footprint Network – Product water footprints
- Sam Altman – "The Gentle Singularity"
- The Verge – "Sam Altman claims an average ChatGPT query uses 'roughly one fifteenth of a teaspoon' of water"
Values shown are mid-range averages intended for awareness and comparison, not precise accounting.
You may see much lower water-use figures quoted elsewhere, including estimates from OpenAI that focus on direct, on-site cooling water at highly optimised data centres. This tool uses broader, system-level estimates that also account for indirect water use associated with electricity generation and supporting infrastructure.
In addition, real-world AI usage varies widely. Simple text prompts typically require less compute than image generation, longer conversations, or multi-step analysis. The values shown here are intended to reflect an average across a mix of common usage patterns rather than a best-case scenario.
Why do these numbers vary?
AI systems don’t operate in a single, fixed environment. The resources required to generate a response depend on where and how that response is produced, and on the wider systems that support it. For this reason, estimates of energy, water, and emissions are best understood as ranges rather than exact values. The figures shown here are mid-range estimates intended to provide context, not precise measurement.