How DeepMind Decreased Energy Consumption in Google Data Centres

In 2016, DeepMind applied machine learning to Google’s data-centre cooling systems and reportedly achieved up to a 40% reduction in the energy used for cooling, which DeepMind described as roughly a 15% reduction in overall PUE (power usage effectiveness) overhead. The key move wasn’t building new hardware; it was using the existing sensors and control systems more intelligently. DeepMind’s models were trained on historical data from the building management system: temperatures, pump and fan speeds, power readings, weather and IT workload patterns. The trained system then produced recommended control actions (for example, chiller and cooling-tower set-points) that human operators could approve or override, with guardrails to keep the plant within safe operating limits.
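
DeepMind has not released the production system, but the shape of the approach can be sketched. The block below fits a surrogate model that predicts PUE from historical building-management telemetry; the column names, data schema and model choice (a gradient-boosted regressor standing in for the ensemble of deep neural networks DeepMind described) are assumptions for illustration only.

```python
# A minimal sketch of the "learn from telemetry" half of the loop.
# Column names and the data schema are hypothetical, not DeepMind's.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

CONDITIONS = ["outside_air_temp_c", "it_load_kw"]                 # measured, not controllable
CONTROLS = ["chilled_water_setpoint_c", "cooling_tower_fan_pct"]  # set-points we can move
TARGET = "pue"                                                    # power usage effectiveness

def train_energy_model(telemetry: pd.DataFrame) -> GradientBoostingRegressor:
    """Fit a surrogate that predicts PUE from plant conditions plus control set-points."""
    X = telemetry[CONDITIONS + CONTROLS]
    y = telemetry[TARGET]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    model = GradientBoostingRegressor(n_estimators=300, max_depth=4)
    model.fit(X_train, y_train)
    print(f"Held-out R^2: {model.score(X_test, y_test):.3f}")
    return model
```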

The most interesting part is that the “win” came from optimisation and control, not from exotic new technology. The same copper, chillers and fans simply ran in a better configuration more of the time. By learning the complex, non-linear relationship between controls and energy use, the model could find operating points that human rules of thumb often missed, especially as conditions shifted with time of day, outside temperature and IT load. For organisations thinking about “Green AI”, DeepMind’s work is a useful template: start with a high-energy system that already has rich telemetry, keep safety constraints explicit, and use data-driven control to squeeze out waste before dreaming about new hardware or entirely new facilities.
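
Continuing that hypothetical sketch, the control step can be as simple as sweeping candidate set-points inside hard safety limits, scoring each with the learned surrogate, and surfacing the cheapest combination for an operator to approve. The SAFE_RANGES values, grid step and exhaustive search below are illustrative stand-ins for whatever optimiser the real system used.

```python
# Continues the sketch above (reuses CONDITIONS and CONTROLS).
# SAFE_RANGES and the grid step are made-up illustrative limits.
import itertools
import pandas as pd

SAFE_RANGES = {
    "chilled_water_setpoint_c": (5.0, 12.0),
    "cooling_tower_fan_pct": (20.0, 100.0),
}

def recommend_setpoints(model, current_conditions: dict, step: float = 1.0) -> dict:
    """Return the in-limits set-point combination with the lowest predicted PUE."""
    grids = {
        name: [lo + i * step for i in range(int((hi - lo) / step) + 1)]
        for name, (lo, hi) in SAFE_RANGES.items()
    }
    candidates = [dict(zip(grids, values)) for values in itertools.product(*grids.values())]
    frame = pd.DataFrame([{**current_conditions, **c} for c in candidates])
    frame["predicted_pue"] = model.predict(frame[CONDITIONS + CONTROLS])
    return frame.sort_values("predicted_pue").iloc[0].to_dict()

# The recommendation shifts with conditions, e.g. a cool night at light load
# versus a hot afternoon at peak IT load:
#   recommend_setpoints(model, {"outside_air_temp_c": 8.0, "it_load_kw": 900.0})
#   recommend_setpoints(model, {"outside_air_temp_c": 31.0, "it_load_kw": 1500.0})
```

Treating the output as a recommendation, with hard limits enforced outside the model, mirrors the guardrailed, operator-in-the-loop deployment described above.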