The hidden water consumption of AI systems: how to measure your own footprint
In the digital age, artificial intelligence (AI) has become an integral part of our daily lives. From powering chatbots like ChatGPT to driving recommendations on streaming platforms, AI systems are everywhere. But what lies beneath the surface of these innovative technologies? A closer look reveals the environmental footprint they leave, particularly in terms of water consumption.
Google's latest disclosure sheds light on this issue, suggesting that optimizing AI systems with specialized chips, efficient cooling, smart workload management, recycling water, and strategically locating data centers in cooler, wetter regions can significantly reduce water usage.
However, the water usage of AI systems can vary widely depending on the location and time of operation. On-site cooling, often using evaporative cooling towers, is a common practice that draws water from local supplies and evaporates much of it. In contrast, wind turbines and solar panels, once built, use almost no water aside from occasional cleaning.
Power plants, which generate the electricity that runs these data centers, also consume large volumes of water; coal, gas, and nuclear plants in particular use water for their steam cycles and cooling. This means the water consumption of AI systems includes not only the water used to cool a data center's servers but also the water consumed at the power plants supplying it.
Newer approaches, such as immersion cooling and Microsoft's zero-water cooling design, offer promising alternatives to reduce water usage in data centers. Google's Gemini system, for instance, uses about 0.26 milliliters of water per median text prompt, which is roughly the volume of five drops.
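As a quick sanity check on the "five drops" comparison, here is the conversion in Python. The drop volume of 0.05 mL is an assumption (a common rule-of-thumb value, not a figure from Google's disclosure):

```python
# Google's reported figure: ~0.26 mL of water per median Gemini text prompt.
WATER_PER_PROMPT_ML = 0.26
# Assumed typical drop volume (~0.05 mL); not from Google's disclosure.
DROP_VOLUME_ML = 0.05

drops_per_prompt = WATER_PER_PROMPT_ML / DROP_VOLUME_ML
print(f"{drops_per_prompt:.1f} drops per prompt")  # → 5.2 drops per prompt
```

So "roughly five drops" checks out under that assumption.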
When multiplied by millions of queries, the water usage of AI adds up, but it is still small compared to other common uses such as lawns, showers, and laundry. Usefully, the water footprint of a single AI response is easy to estimate: multiply the energy used per prompt by the data center's water factor (Energy per prompt × Water factor = Water per prompt).
For instance, a medium-length query to GPT-5 results in a water footprint of approximately 39 milliliters, while the same query to GPT-4o yields about 3.5 milliliters. These figures show how widely the footprint varies between models, and that newer, larger models can consume considerably more water per query.
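The calculation above can be sketched in a few lines of Python. The energy-per-prompt and water-factor inputs below are illustrative assumptions chosen so the output matches the quoted figures; they are not official numbers from OpenAI or the data center operators:

```python
def water_per_prompt_ml(energy_wh: float, water_factor_ml_per_wh: float) -> float:
    """Water footprint of one prompt: energy per prompt × water factor."""
    return energy_wh * water_factor_ml_per_wh

# Assumed inputs (illustrative, not official figures):
WATER_FACTOR_ML_PER_WH = 2.0  # i.e. 2 L/kWh, combined on-site and power-plant water
GPT5_ENERGY_WH = 19.5         # assumed energy for a medium-length GPT-5 query
GPT4O_ENERGY_WH = 1.75        # assumed energy for the same query to GPT-4o

print(water_per_prompt_ml(GPT5_ENERGY_WH, WATER_FACTOR_ML_PER_WH))   # → 39.0 mL
print(water_per_prompt_ml(GPT4O_ENERGY_WH, WATER_FACTOR_ML_PER_WH))  # → 3.5 mL
```

Swapping in your own provider's energy estimate and your region's water factor gives a rough personal footprint per query.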
Despite these advancements, these solutions are not yet mainstream due to cost, maintenance complexity, and the difficulty of converting existing data centers to new systems. Most operators still rely on evaporative systems.
It's also important to consider geography. Water use shifts dramatically with location: cool, humid areas use minimal water, while hot, dry areas consume large volumes. Timing matters too, as data centers in hot, dry regions consume the most water during summer and at midday during heat waves.
In conclusion, while AI systems do consume water, their footprint is manageable compared to other uses. By optimizing systems, exploring new cooling technologies, and strategically locating data centers, the water footprint of AI can be significantly reduced. As AI continues to evolve, it's crucial to prioritize sustainability and efficiency to minimize its environmental impact.