April 22, 2026 · 11 min read

Your ChatGPT Conversation Just Used a Bottle of Water. Nobody Wants to Talk About It.

Marcus Rodriguez

Every time you ask ChatGPT a question, somewhere in Iowa a data center drinks water.

Not a metaphor. Not a figure of speech. Microsoft's data center cluster in West Des Moines pumped roughly 11.5 million gallons of water in July 2022 alone, just as OpenAI was finishing training GPT-4, and Microsoft's global water consumption spiked 34% that year. Google's jumped 20% in the same period. The pattern is the same everywhere: more AI, more water.

A single ChatGPT conversation — roughly 20-50 prompts — uses about 500ml of water. That's a standard water bottle. Gone. Evaporated into the atmosphere to keep a server rack from overheating so you could ask an AI to write your grocery list.

And nobody wants to talk about it.

The numbers nobody puts on the landing page

Let's start with what the AI companies don't volunteer:

Microsoft reported using 6.4 billion liters of water globally in fiscal 2022, up 34% from the year before. With the full rollout of Copilot and deeper OpenAI integration, the number kept climbing through 2023 and 2024. The 2025 and 2026 figures haven't been published yet. The trend line is not subtle.

Google consumed 5.6 billion gallons in 2022, a 20% year-over-year increase driven by AI workloads. Their data centers in The Dalles, Oregon, drew so much water from the Columbia River watershed that local officials started asking uncomfortable questions about agricultural impact.

Meta used 2.68 million cubic meters of water in a single year. When they started training Llama models at scale, their water use followed.

These aren't abstract numbers. These are aquifers. Reservoirs. Municipal water supplies. In places that are already dealing with drought.

Why AI needs water in the first place

AI doesn't drink water. But the chips that run AI get extremely hot.

Training a large language model like GPT-5 involves thousands of GPUs running at maximum capacity for weeks or months. Each GPU generates significant heat. Multiply by thousands, pack them into a building, and you have a structure that would melt itself without cooling.

Data centers cool themselves primarily through evaporative cooling: essentially, they sweat. Water absorbs heat from the server racks, evaporates, and carries that heat into the atmosphere. The water is consumed in the sense that matters: it leaves as vapor and doesn't come back to the local supply.
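You can get a feel for the scale from basic physics. Evaporating a kilogram of water absorbs about 2.26 megajoules, so removing a kilowatt-hour of server heat purely by evaporation consumes on the order of a liter and a half. The numbers below are a rough sketch, not vendor figures; real facilities vary widely in how much of their heat leaves this way.

```python
# Rough physics of evaporative cooling. Illustrative assumptions, not vendor figures.
LATENT_HEAT_MJ_PER_KG = 2.26   # energy absorbed when 1 kg of water evaporates
MJ_PER_KWH = 3.6               # 1 kWh of server heat equals 3.6 MJ

def liters_evaporated(server_energy_kwh: float, evaporative_fraction: float = 1.0) -> float:
    """Water evaporated to carry away the heat from `server_energy_kwh` of compute.

    Treats 1 kg of water as 1 liter and assumes `evaporative_fraction` of the
    heat leaves through the cooling towers (the rest via air or chillers).
    """
    heat_mj = server_energy_kwh * MJ_PER_KWH * evaporative_fraction
    return heat_mj / LATENT_HEAT_MJ_PER_KG

print(round(liters_evaporated(1.0), 2))        # ~1.59 L per kWh if every joule is removed by evaporation
print(round(liters_evaporated(1.0, 0.5), 2))   # ~0.8 L per kWh if only half of it is
```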

Some facilities use closed-loop cooling with refrigerants, which reduces direct water consumption but increases electricity use (which has its own water footprint at the power plant). Others use "free cooling" in cold climates — running outside air through the facility. But most major data centers, especially in temperate climates, rely heavily on evaporative systems.

The result: every AI query has a water cost, and every AI model that gets trained has a water cost orders of magnitude larger.

Training vs. inference — the hidden multiplier

There are two phases of AI water consumption, and the industry loves to conflate them to minimize the headline number.

Training is the one-time cost of building the model. Training GPT-3 consumed an estimated 700,000 liters of water, and GPT-4 and GPT-5 almost certainly used more: models get bigger, training runs get longer. But training happens once (or a few times with iterations), so companies love to say "the water cost of training is a one-time investment."

Inference is every single time someone uses the model. Every query, every prompt, every "regenerate response." This is where the real water consumption lives, because inference happens billions of times per day across hundreds of millions of users.

The University of California, Riverside estimated that a single ChatGPT conversation of 20-50 questions consumes approximately 500ml of water. That's just the water for cooling the servers during your session. It doesn't include the water used to generate the electricity that powers those servers.
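Take that estimate at face value and the per-prompt cost falls out by simple division (a sketch using the figures above, nothing more):

```python
# Back out a per-prompt water cost from the UC Riverside estimate. Rough division, not a measurement.
CONVERSATION_WATER_ML = 500        # ~500ml per conversation of 20-50 prompts
PROMPTS_LOW, PROMPTS_HIGH = 20, 50

per_prompt_high = CONVERSATION_WATER_ML / PROMPTS_LOW    # short conversations: 25ml per prompt
per_prompt_low = CONVERSATION_WATER_ML / PROMPTS_HIGH    # long conversations: 10ml per prompt

print(f"{per_prompt_low:.0f}-{per_prompt_high:.0f}ml of cooling water per prompt")
# Roughly 10-25ml per prompt, which is where the 25ml-per-regenerate figure later in this post comes from.
```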

ChatGPT has over 200 million weekly active users. Even if the average user has just one conversation per week, that's 100 million liters of water. Per week. Just for ChatGPT. Not counting Claude, Gemini, Perplexity, Grok, Midjourney, and every other AI service running on similar infrastructure.

The electricity problem underneath the water problem

Water is the visible cost. Electricity is the invisible one — and it's bigger.

The International Energy Agency projected that data center electricity consumption could double between 2022 and 2026, driven in large part by AI. Global data centers are expected to consume over 1,000 terawatt-hours annually by 2026, roughly the total electricity consumption of Japan.

That electricity has its own water footprint. Coal and natural gas power plants use water for cooling. Even solar and wind farms have water costs in manufacturing. The full lifecycle water cost of an AI query is significantly higher than just the data center cooling.

Goldman Sachs estimated that a single ChatGPT query uses roughly 10x the electricity of a Google search. An AI-generated image uses even more. An AI-generated video — the kind you can now make with Sora or Veo 3 — uses orders of magnitude more again.
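Put the energy and water numbers together and you get a rough per-query lifecycle figure. The sketch below uses round-number assumptions for the query energy (about 3 Wh, in line with the Goldman Sachs comparison above), the cooling water used on site per kilowatt-hour, and the water consumed generating that electricity; none of these are disclosed figures.

```python
# Full-lifecycle water per query: on-site cooling plus the water used to generate the electricity.
# Every number here is an illustrative assumption, not a disclosed figure.
QUERY_ENERGY_KWH = 0.003          # ~3 Wh per query, roughly 10x a traditional web search
ONSITE_COOLING_L_PER_KWH = 1.8    # assumed evaporative-cooling water per kWh at the data center
GENERATION_WATER_L_PER_KWH = 2.0  # assumed water consumed per kWh at the power plants supplying it

def water_per_query_ml(energy_kwh: float = QUERY_ENERGY_KWH) -> float:
    """Milliliters of water attributable to one query under the assumptions above."""
    liters = energy_kwh * (ONSITE_COOLING_L_PER_KWH + GENERATION_WATER_L_PER_KWH)
    return liters * 1000

print(f"{water_per_query_ml():.1f}ml per text query")                             # ~11ml with these assumptions
print(f"{water_per_query_ml(QUERY_ENERGY_KWH * 10):.0f}ml per image generation")  # if an image costs ~10x the energy
```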

Where this water is coming from

This wouldn't be quite as alarming if data centers were built next to unlimited water supplies. They're not.

Arizona — one of the most water-stressed states in the country — hosts multiple major data center campuses. Microsoft, Google, and Meta all operate facilities in the Phoenix metro area, where water rights are already the subject of legal battles between agriculture, residential development, and industry.

Oregon — Google's The Dalles facility draws from the Columbia River basin, which supports salmon runs, agricultural irrigation, and drinking water for downstream communities. Local residents have raised concerns about the long-term sustainability of giving tech companies priority access to shared water resources.

Iowa — Microsoft's West Des Moines campus uses water from the city's municipal system. The 11.5 million gallons pumped in July 2022 alone represented a meaningful share of everything the district supplied that month. Iowa is not a desert, but it's not immune to drought either.

Chile — Google faced significant local opposition when building a data center near Santiago, where water scarcity is already a political crisis.

The pattern is consistent: AI companies build where land is cheap and regulations are permissive, which often means places where water is already under pressure.

What the companies say vs. what they do

Every major AI company has a sustainability report. Every one of them claims to be "water positive" or "working toward water positivity" by some future date.

Microsoft pledged to be "water positive" by 2030 — meaning they'd replenish more water than they consume. In the meantime, their consumption keeps climbing. Being water positive in 2030 doesn't help the aquifer in Iowa in 2026.

Google claims their data centers are among the most efficient in the industry, using 10% less water than the industry average. But 10% more efficient at massive scale is still massive consumption. A fuel-efficient truck is still a truck.

OpenAI — the company behind ChatGPT — doesn't operate its own data centers. They run on Microsoft Azure. So when you ask about OpenAI's water usage, they point to Microsoft. When you ask Microsoft, they give you aggregate numbers that include Xbox Live and Office 365 alongside AI. The accounting is designed to make it impossible to attribute costs to specific products.

This isn't malice. It's incentive structure. These companies have no reason to make the per-query environmental cost transparent, because if users saw "this response cost 25ml of water" next to every ChatGPT answer, the vibes would shift.

The image and video problem

Text generation is the cheap stuff. When you start generating images and video with AI, the computational cost — and therefore the water and energy cost — increases dramatically.

Generating a single AI image requires significantly more GPU computation than generating a text response. A Midjourney image or a DALL-E generation uses an estimated 3-10x the electricity of a text conversation.

AI video generation is the most resource-intensive consumer AI application that exists. Generating a 10-second video clip with Sora or Veo 3 requires sustained GPU computation that dwarfs text and image generation combined.

As AI video generation becomes mainstream — and it's getting there fast — the water and energy footprint of AI will accelerate in ways that current projections don't fully capture.

What can you actually do about it

Here's the uncomfortable truth: you're not going to stop using AI. Neither am I. The productivity gains are real. The tools are genuinely useful. Telling people to stop using ChatGPT to save water is like telling people to stop driving to save oil — technically correct and practically useless.

But you can be smarter about it:

Use efficient models when you can. Not every question needs GPT-5.2 or Claude Opus. Smaller models like GPT-5-mini, Gemini Flash, or Claude Haiku use a fraction of the compute and give perfectly good answers for everyday questions. The water cost scales roughly with model size.

Stop regenerating responses you've already read. Every "regenerate" click is another full inference pass — another 25ml of water for a slightly different version of the same answer. Read it once, work with what you get.

Batch your questions. If your questions share background, one conversation with 20 questions uses less total compute than 20 separate conversations where you paste the same context in fresh each time. Prompt caching means the model doesn't have to re-process that shared context from scratch at every turn.
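A rough way to see why, counting tokens under two assumptions: there is a block of shared context you'd otherwise re-paste into every new chat, and a cached prefix costs a fraction of processing it fresh. Both numbers below are placeholders, not provider figures.

```python
# Tokens processed: one conversation reusing a cached context vs. 20 fresh conversations
# that each re-paste the same background. Token counts and cache discount are placeholders.
SHARED_CONTEXT_TOKENS = 2000   # background you'd otherwise paste into every new chat
QUESTION_TOKENS = 50
N_QUESTIONS = 20
CACHED_COST_FACTOR = 0.1       # assumed: reading a cached prefix costs ~10% of processing it fresh

# One conversation: the shared context is processed once, then served from cache on later turns.
one_conversation = SHARED_CONTEXT_TOKENS + N_QUESTIONS * (QUESTION_TOKENS + SHARED_CONTEXT_TOKENS * CACHED_COST_FACTOR)

# Twenty separate conversations: the shared context is re-processed from scratch every single time.
separate_conversations = N_QUESTIONS * (SHARED_CONTEXT_TOKENS + QUESTION_TOKENS)

print(int(one_conversation), separate_conversations)   # ~7000 vs 41000 effective tokens with these assumptions
```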

Choose platforms that aggregate efficiently. Using one platform that routes you to the right model for each task is more efficient than running separate sessions across five different AI services. When one platform manages the model selection, it can use lighter models for light tasks instead of running everything through the biggest model available.
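Here's a minimal sketch of what that routing logic looks like. The model names, the length threshold, and the keyword check are all hypothetical placeholders; a real router would use a classifier or the platform's own heuristics rather than prompt length.

```python
# Minimal model-routing sketch: light prompts go to a small model, heavy ones to a large model.
# Model names and routing rules are hypothetical placeholders, not any platform's actual logic.
LIGHT_MODEL = "small-efficient-model"
HEAVY_MODEL = "large-frontier-model"

def pick_model(prompt: str) -> str:
    """Route by a crude proxy for task difficulty."""
    looks_heavy = len(prompt) > 2000 or any(
        keyword in prompt.lower()
        for keyword in ("prove", "refactor", "multi-step", "analyze this codebase")
    )
    return HEAVY_MODEL if looks_heavy else LIGHT_MODEL

print(pick_model("What's a good substitute for buttermilk?"))                 # -> small-efficient-model
print(pick_model("Analyze this codebase and propose a refactor plan: ..."))   # -> large-frontier-model
```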

Push for transparency. The reason AI companies don't disclose per-query environmental costs is that nobody's asking. If users started asking "how much water did that response cost?" — the way they've started asking about data privacy — the industry would respond.

The math that should make you uncomfortable

Let's do some rough arithmetic:

  • 200 million ChatGPT weekly active users
  • Average 2 conversations per week (conservative)
  • 500ml water per conversation
  • = 200 million liters of water per week, just for ChatGPT
  • = 10.4 billion liters per year

That's roughly the annual water consumption of a city of 100,000 people. For one AI product. Not counting Claude, Gemini, Perplexity, Midjourney, Sora, and every other AI service running inference at scale.

Now factor in that AI usage is growing 50-100% year over year. Factor in that video generation is emerging as a mainstream use case with 100x the compute cost per request. Factor in that every company in every industry is "adding AI" to their products.
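Written out, with an assumed growth rate layered on top of the bullets above (this is a compounding exercise, not a forecast):

```python
# Compound the rough ChatGPT-only estimate forward. Assumptions and arithmetic, not forecasts.
WEEKLY_USERS = 200_000_000
CONVERSATIONS_PER_WEEK = 2          # the "conservative" average from the bullets above
LITERS_PER_CONVERSATION = 0.5       # the ~500ml UC Riverside estimate
WEEKS_PER_YEAR = 52

baseline = WEEKLY_USERS * CONVERSATIONS_PER_WEEK * LITERS_PER_CONVERSATION * WEEKS_PER_YEAR
print(f"2026 baseline: {baseline / 1e9:.1f} billion liters per year")   # ~10.4 billion liters

for growth in (0.5, 1.0):                                # 50% and 100% year-over-year usage growth
    by_2030 = baseline * (1 + growth) ** 4               # four years of compounding, 2026 to 2030
    print(f"{growth:.0%} growth: {by_2030 / 1e9:.0f} billion liters in 2030")
# ~53 billion liters at 50% growth, ~166 billion at 100% growth, for ChatGPT alone at today's per-query cost
```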

The 2026 water footprint of AI is a rounding error compared to what 2030 will look like if nothing changes.

This isn't an argument against AI

Let's be clear: this article isn't "AI is bad, stop using it." AI is one of the most powerful tools humans have ever built. It's making medicine faster, education more accessible, and creative work more democratic. The benefits are real and they matter.

But the environmental cost is also real and it also matters. Right now, that cost is invisible to users, unaccounted for in pricing, and growing exponentially. The AI industry needs to solve this the same way it's solving alignment and safety — as a core engineering challenge, not an afterthought for the sustainability report.

Until they do, the least we can do is be efficient. Use the right model for the job. Don't waste compute on regenerations. And maybe think twice before asking AI to generate a video of a cat wearing sunglasses.

That cat cost a swimming pool.


If you're going to use AI, use it efficiently. LazySusan routes you to the right model for each task — lighter models for simple questions, heavier ones only when you need them. One subscription, 50+ models, less waste. Get started for $8/mo.
