Data Center Design and its Energy Consumption

Allen Yu
November 26, 2017

Submitted as coursework for PH240, Stanford University, Fall 2017

Introduction

Fig. 1: A water-cooled data center in the Port of Strasbourg, France. (Source: Wikimedia Commons)

As digital data grows exponentially, storing and serving it has become an increasingly prominent problem. In 2014, U.S. data centers consumed about 70 billion kilowatt hours of electricity, around 1.8% of total U.S. electricity consumption. [1] Data centers are among the major costs for the world's largest Internet companies such as Facebook, Google, and Amazon, so there is a financial incentive to reduce their energy consumption. There are also environmental concerns about this level of energy use, and many researchers are studying designs that reduce it. One frequently used metric is the power usage effectiveness (PUE), defined as the total energy required by the data center divided by the energy needed by the IT equipment alone. [2]
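As a concrete illustration, PUE can be computed directly from metered facility and IT energy. The short Python sketch below uses made-up annual readings (the 3,000,000 and 2,000,000 kilowatt hour figures are assumptions for illustration, not values from the cited report); a PUE closer to 1 indicates less overhead spent on cooling, power distribution, and lighting.

    # Computing power usage effectiveness (PUE) from metered annual energy.
    # The kWh figures are hypothetical, for illustration only.
    def pue(total_facility_kwh, it_equipment_kwh):
        """PUE = total facility energy / IT equipment energy."""
        return total_facility_kwh / it_equipment_kwh

    total_kwh = 3_000_000  # assumed total facility consumption per year
    it_kwh = 2_000_000     # assumed IT equipment consumption per year
    print(f"PUE = {pue(total_kwh, it_kwh):.2f}")  # prints: PUE = 1.50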

Water Consumption

Beyond energy, another major resource that data centers consume is water. [2] Water is consumed in two key processes: electricity generation and on-site cooling. In the first, thermoelectric and hydroelectric plants in the U.S. consume an average of 2 gallons of water per kilowatt hour generated [2] (simple-cycle gas turbine plants, by contrast, use no water for cooling). In the second, cooling towers at the data center dissipate heat through water evaporation, consuming an estimated average of 0.5 gallons of water per kilowatt hour.
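To get a rough sense of scale, the back-of-the-envelope sketch below multiplies these per kilowatt hour figures by the 2014 consumption cited in the introduction. This is an illustration of the arithmetic, not an estimate from the report: actual footprints depend on the local generation mix and on whether a site uses evaporative cooling at all.

    # Back-of-the-envelope water footprint estimate (illustrative only).
    GAL_PER_KWH_GENERATION = 2.0  # consumed at thermo/hydroelectric plants
    GAL_PER_KWH_COOLING = 0.5     # evaporated in on-site cooling towers

    annual_kwh = 70e9  # 2014 U.S. data center electricity use cited above
    water_gal = annual_kwh * (GAL_PER_KWH_GENERATION + GAL_PER_KWH_COOLING)
    print(f"Estimated water footprint: {water_gal:.2e} gallons per year")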

Energy Saving Strategies

There are several ways to reduce energy usage. First, data centers can use virtualization, which divides a physical server into many virtual servers; virtualized servers require less physical hardware and consequently generate less heat. Second, data centers can consolidate servers, replacing multiple servers running at low utilization with a single server running at high utilization (a simple power model illustrating this effect is sketched after this paragraph). [3] Third, modern server designs have built-in power-off and power-saving modes that engage when the servers are not running heavy loads. Fourth, solid-state drives (SSDs) consume less energy than hard disk drives (HDDs) because their power draw scales with storage capacity rather than being roughly fixed per disk; SSD usage increased from 8% to 22% between 2012 and 2017. [2] Last but not least, there is a trend, especially among the large technology companies, toward building larger and more efficiently run facilities called hyperscale data centers. Larger data centers can better optimize their server usage and have the resources to manage power more carefully.
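The benefit of consolidation can be seen with a simple model in which a server draws a large fraction of its peak power even when idle, so that many lightly loaded machines waste energy. The idle and peak wattages and the utilization levels below are assumed, illustrative values, not measurements from the cited reports.

    # Linear power model: power rises from idle to peak with utilization.
    # Wattages and utilization levels are assumed, illustrative values.
    def server_power_w(utilization, idle_w=100.0, peak_w=300.0):
        return idle_w + (peak_w - idle_w) * utilization

    # Same total work: ten servers at 10% load vs. two servers at 50% load.
    before = 10 * server_power_w(0.10)
    after = 2 * server_power_w(0.50)
    print(f"Before consolidation: {before:.0f} W, after: {after:.0f} W")

In this toy example the consolidated configuration draws 400 W instead of 1200 W for the same amount of work, because far less power is spent keeping idle hardware on.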

Future Development

Beyond the trends described above, companies are experimenting with new ways of reducing power consumption. One method is building outdoor data centers that leverage natural cooling. Many new data centers also use renewable energy sources such as solar and wind to become more self-sufficient and environmentally friendly (see Fig. 1). [4] However, because renewable generation is intermittent, it is important to develop server management software specifically designed to work with these sources.
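One way such management software might work, sketched below under assumed numbers, is to defer flexible batch jobs to the hours with the highest forecast renewable generation; a real scheduler would also have to respect job deadlines, capacity limits, and any on-site energy storage. The hourly forecast values and job names here are hypothetical.

    # Toy renewable-aware scheduler: place flexible batch jobs in the hours
    # with the highest forecast renewable supply (all values hypothetical).
    renewable_forecast_kw = {9: 40, 10: 120, 11: 200, 12: 220, 13: 180, 14: 90}
    flexible_jobs = ["backup", "index-rebuild", "ml-training"]

    # Rank hours by forecast supply and assign one job per hour.
    best_hours = sorted(renewable_forecast_kw, key=renewable_forecast_kw.get, reverse=True)
    for job, hour in zip(flexible_jobs, best_hours):
        print(f"{job}: run at {hour}:00 ({renewable_forecast_kw[hour]} kW forecast)")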

© Allen Yu. The author warrants that the work is the author's own and that Stanford University provided no input other than typesetting and referencing guidelines. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.

References

[1] N. Brackbill, "The Energy of the Cloud," Physics 240, Stanford University Fall 2016.

[2] A. Shehabi et al., "United States Data Center Energy Usage Report," Lawrence Berkeley National Laboratory LBNL-1005775, June 2016.

[3] K. G. Brill, "Data Center Energy Efficiency and Productivity," The Uptime Institute, 2007.

[4] J. Lee, "Energy Usage of Server Farms," Physics 240, Stanford University Fall 2012.