Fig. 1: Classification of energy-saving cooling technologies for data centers and telecommunications base stations. Image source: X. Huang, after Zhang et al. [2]
Data centers constitute the physical core of the digital economy, enabling cloud computing, artificial intelligence, and large-scale data storage. In 2022, global data-center electricity consumption was estimated at 240 to 340 TWh/year, roughly 1% to 1.3% of total global demand. [1] Electricity from the grid passes through substations, transformers, and generators before reaching the data center, where nearly all of it becomes heat inside processors and memory. Cooling therefore underpins the reliable operation of high-performance computing (HPC) and data-processing hardware.
Cooling modules continuously remove this heat to maintain safe operating temperatures. The overall efficiency is measured by the Power Usage Effectiveness (PUE):
PUE = Ptotal / PIT
where PIT is the IT load and Ptotal includes cooling and other overhead. [2] State-of-the-art facilities report PUE ≈ 1.06, while conventional air-cooled sites operate around 1.3-1.5. [2,3] Cooling is a thermodynamic necessity: every joule of computation becomes a joule of heat. The overhead is quantified by equations that link microscopic energy use per operation to macroscopic facility demand, forming the basis for the following analysis:
Etotal = Ecompute × PUE
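These two relations can be checked numerically. The sketch below uses hypothetical load figures (an assumed 10 MW IT load with 1.5 MW of cooling and power-conditioning overhead), chosen only to illustrate the definitions; they are not measurements from any facility.

```python
# Illustrative PUE and total-energy calculation.
# The load values below are hypothetical examples, not measured data.

def pue(p_total_kw: float, p_it_kw: float) -> float:
    """PUE = Ptotal / PIT (both in the same units, e.g. kW)."""
    return p_total_kw / p_it_kw

def total_energy(e_compute_mwh: float, pue_value: float) -> float:
    """Etotal = Ecompute x PUE."""
    return e_compute_mwh * pue_value

# Example: 11.5 MW total site power for a 10 MW IT load.
p = pue(11_500.0, 10_000.0)         # -> 1.15
# 10 MW running year-round: 10 MW x 8760 h = 87,600 MWh of IT energy.
e = total_energy(87_600.0, p)       # -> 100,740 MWh/year at the site meter
print(f"PUE = {p:.2f}, E_total = {e:,.0f} MWh/year")
```

The difference between Etotal and Ecompute (here about 13,100 MWh/year) is the overhead that the rest of this analysis targets.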
Fig. 1 summarizes major data center cooling technologies, including Free Cooling (air or water side economizers), Liquid Cooling (cold plate or immersion methods), Two-Phase Cooling (heat pipe or thermosiphon), and Thermal Energy Storage (TES) based Cooling systems.
Table 1: Cooling technologies and performance metrics.
The ideal lower bound under perfect thermodynamic conditions is PUEmin = Tout/Tin; with Tin ≈ 295 K and hot-weather Tout ≈ 308-318 K, PUEmin ≈ 1.04-1.08, so reported PUE values of ~1.15-1.5 are physically plausible. Higher supply temperatures reduce the thermodynamic lift and enable economization, which explains the lower overhead compared with low-temperature liquid or air cooling.
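The bound follows from a Carnot refrigerator rejecting heat at Tout while absorbing it at Tin: the cooling work per unit IT power is (Tout − Tin)/Tin, so PUEmin = 1 + (Tout − Tin)/Tin = Tout/Tin. A minimal numerical check, using the temperatures quoted above:

```python
# Carnot-limited lower bound on PUE: PUE_min = T_out / T_in,
# with both temperatures in kelvin (values from the text above).

def pue_min(t_out_k: float, t_in_k: float) -> float:
    """Ideal minimum PUE for heat rejection at t_out, supply at t_in."""
    return t_out_k / t_in_k

t_in = 295.0                      # supply (inlet) temperature, K
for t_out in (308.0, 318.0):      # hot-weather rejection temperatures, K
    print(f"T_out = {t_out:.0f} K -> PUE_min = {pue_min(t_out, t_in):.3f}")
# 308/295 ~ 1.044 and 318/295 ~ 1.078, matching the quoted 1.04-1.08 range.
```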
Table 1 summarizes the current state of the art in cooling technologies and performance metrics. ESR denotes the energy-saving ratio relative to a baseline; higher values indicate greater facility energy savings. Cooling accounts for one of the largest non-IT energy demands in data centers, as it must remove processor heat efficiently while maintaining safe inlet temperatures and minimizing electrical overhead. In typical facilities, cooling consumes 25 to 40% of total electricity, though this share can fall below 20% in optimized liquid-cooled designs. [1,2] The table shows that warm-water liquid cooling has the lowest PUE at 1.15, meaning infrastructure overhead is around 13% of total energy consumption, lower than the other current cooling technologies. Despite this energy saving, however, the total cost of ownership of liquid-cooled systems is likely to exceed that of air cooling, since the initial investment requires equipment specially designed for the chips.
Global data centers are estimated to consume about 240-340 TWh of electricity annually, equivalent to (0.86-1.22) × 10^18 J. [1] With a liquid-cooled design PUE of approximately 1.15, about (PUE − 1)/PUE = 0.13, or 13%, of the total corresponds to non-IT overhead such as cooling and power conditioning. The global electricity used solely for cooling is therefore roughly 0.13 × (240-340) TWh = 31-44 TWh per year.
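The arithmetic behind this estimate can be reproduced in a few lines, using only the figures already stated (PUE ≈ 1.15 and 240-340 TWh/year of total demand):

```python
# Global cooling-electricity estimate from the overhead fraction
# (PUE - 1) / PUE, applied to the 240-340 TWh/year range in the text.

pue_value = 1.15
overhead_fraction = (pue_value - 1.0) / pue_value   # ~ 0.130

low_twh, high_twh = 240.0, 340.0                    # global demand, TWh/year
cooling_low = overhead_fraction * low_twh           # ~ 31 TWh/year
cooling_high = overhead_fraction * high_twh         # ~ 44 TWh/year

print(f"Overhead fraction: {overhead_fraction:.3f}")
print(f"Cooling electricity: {cooling_low:.0f}-{cooling_high:.0f} TWh/year")
```

Note that (PUE − 1)/PUE ≈ 0.130 rather than exactly 0.13; the rounded 31-44 TWh/year range in the text is unaffected.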
In summary, this paper quantified the energy cost of data-center cooling and compared mainstream technologies using PUE-based metrics. Even in best-in-class deployments, warm-water liquid cooling achieves PUE ~1.15, yet non-IT overhead still accounts for roughly 10-15% of total site electricity. At the global scale, assuming an average PUE of ~1.15, an estimated ~40 TWh/year of electricity is dedicated to cooling, showing that thermal management remains a first-order lever for efficiency and decarbonization.
© Xiao Huang. The author warrants that the work is the author's own and that Stanford University provided no input other than typesetting and referencing guidelines. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.
[1] M. T. Takci et al., "Data Centres as a Source of Flexibility For Power Systems," Energy Rep. 13, 3661 (2025).
[2] Y. Zhang et al., "Cooling Technologies For Data Centres and Telecommunication Base Stations: A Comprehensive Review," J. Clean. Prod. 334, 130280 (2022).
[3] A. A. Alkrush et al., "Data Centers Cooling: A Critical Review of Techniques, Challenges, and Energy Saving Solutions," Int. J. Refrig. 160, 246 (2024).
[4] K.-P. Lee and H.-L. Chen, "Analysis of Energy Saving Potential of Air-Side Free Cooling for Data Centers in Worldwide Climate Zones," Energy Build. 64, 103 (2013).
[5] S. Zimmermann et al., "Aquasar: A Hot Water Cooled Data Center With Direct Energy Reuse," Energy 43, 237 (2012).
[6] Z. He et al., "Energy Efficiency Optimization of an Integrated Heat Pipe Cooling System in Data Center Based on Genetic Algorithm," Appl. Therm. Eng. 182, 115800 (2021).