Data Center Energy Inefficiency

Lita Yang
November 12, 2015

Submitted as coursework for PH240, Stanford University, Fall 2015

Increasing Demand for Data Centers

Fig. 1: A typical server room at a data center. (Source: Wikimedia Commons)

In today's rapidly growing digital economy, consumer demand for more data and high-bandwidth content is fueling the surge in data center business. While it comes as no surprise that there is an insatiable demand for more computing power (given the popularity of Facebook, Instagram, email, and the Internet of Things), few consumers realize just how energetically expensive our "green" electronics are. In a 2013 report from the Digital Power Group, CEO Mark Mills calculates that the average iPhone consumes more energy than a medium-sized refrigerator from the Environmental Protection Agency's Energy Star ratings list: 361 kWh of electricity per year for wireless connections, data usage, and battery charging versus 322 kWh per year for the refrigerator. [1] While the amount of energy used by any smartphone varies widely with consumer usage, this comparison highlights a serious energy concern, considering that the number of smartphone users is estimated to surpass 2 billion in 2016. [2] To support all this digital activity, more than 1.8 trillion gigabytes of digital information were created globally in 2012, handled by more than three million data centers worldwide. [3] Given the trend of the digital economy, these numbers are only expected to grow, with no end in sight to the demand for more data.

The Staggering Energy Cost of Data Centers

Unfortunately, growth in the number and size of data centers brings high energy costs. The server rooms of a single large data center, such as the one pictured in Fig. 1, can draw as much electricity as 180 thousand homes. [1] The increase in data center energy consumption over the past several years is staggering. In 2012, the New York Times reported that digital warehouses worldwide consume 30 billion watts of electricity (roughly the output of 30 nuclear power plants), with data centers in the U.S. accounting for a quarter to a third of that estimate (8 or 9 nuclear power plants). [3] According to the Natural Resources Defense Council (NRDC), U.S. data centers consumed an estimated 91 billion kilowatt-hours of electricity in 2013 (roughly the output of 34 power plants) and are projected to use 139 billion kilowatt-hours by 2020 (51 power plants), a 53% increase from 2013 to 2020. [4] To put these figures in perspective, 91 billion kilowatt-hours is enough electricity to power all the households in New York City for two years, and 139 billion kilowatt-hours would cost American businesses over $13 billion annually in electricity bills. [4] The high energy cost does not come from just the servers themselves. In fact, as Lee has noted, the industrial cooling systems, the circuitry that keeps backup batteries charged, and power dissipation in the extensive wiring consume nearly as much energy as the servers themselves. [5]
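As a quick sanity check on these figures, the 53% growth rate and the average power draw they imply follow directly from the two consumption estimates. The short Python sketch below uses only the numbers cited above.

```python
# Back-of-the-envelope check of the NRDC figures cited above.
consumption_2013 = 91e9   # kWh per year, U.S. data centers (2013 estimate)
consumption_2020 = 139e9  # kWh per year, projected for 2020

growth = (consumption_2020 - consumption_2013) / consumption_2013
print(f"Projected growth, 2013 to 2020: {growth:.0%}")  # ~53%

# Average continuous power implied by the 2013 consumption figure.
hours_per_year = 365 * 24
avg_power_gw = consumption_2013 / hours_per_year / 1e6  # kW -> GW
print(f"Implied average draw: {avg_power_gw:.1f} GW")   # ~10 GW
```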

Why Data Centers are Inefficient

The mounting energy costs have incentivized the data center industry to examine the major causes of data center energy consumption and possible courses of action to improve the situation. Unfortunately, one of the main causes of data center inefficiency is the symbiotic relationship between users requesting more processing power and the companies that risk losing business if they fail to meet consumer demand. As a result, most organizations run their data centers at full capacity around the clock and, to guard against power failures, install additional banks of generators that emit diesel exhaust. [3] The fear of data center failure leaves the equipment vastly underutilized most of the time, since the systems are heavily overprovisioned for the worst-case scenario. The NRDC reports that the average server operates at no more than 12 to 18 percent of its maximum capacity while still drawing 30 to 60 percent of its maximum power. [4]
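To make the scale of that waste concrete, the sketch below estimates how much more energy such a server spends per unit of useful work than the same machine running at full load. The nameplate power is a hypothetical value; the utilization and power fractions are the midpoints of the NRDC ranges cited above.

```python
# Why low utilization is so wasteful (hypothetical, illustrative server).
max_power_w = 500.0    # assumed nameplate power of one server
utilization = 0.15     # midpoint of the 12-18% utilization range
power_fraction = 0.45  # midpoint of the 30-60% power-draw range

power_draw_w = power_fraction * max_power_w
# Energy per unit of delivered work, relative to the same server at full load.
relative_energy_per_work = power_fraction / utilization
print(f"Draws {power_draw_w:.0f} W to deliver 15% of its capacity")
print(f"~{relative_energy_per_work:.0f}x the energy per unit of work of a fully loaded server")
```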

Another major source of inefficiency is poor practice and reluctance to adopt the latest technology. The NRDC, in partnership with Anthesis, reports that up to 30 percent of servers are "comatose" (obsolete and no longer needed, but still plugged in and consuming electricity around the clock), while other machines are grossly underutilized. [4] This inefficiency persists because operators either do not realize these machines are no longer in use or fear that decommissioning them will disrupt business operations. Businesses could similarly save money by buying newer, more energy-efficient models, but the higher upfront price tag often deters them from upgrading.

In a telling statistic from the NRDC, the major culprits of data center inefficiency are not the large corporations typically associated with high-performance computing, but the less visible small and medium-sized data centers, which lag behind in efficiency and in the adoption of newer technology. In fact, hyper-scale cloud computing data centers make up only 1% of data center energy consumption in the United States, while small and medium-sized data centers are responsible for 49% of U.S. server electricity consumption. [4]

Challenges for Implementing Change

The major causes of data center inefficiency pose several barriers to saving energy. Realistically, given the current insatiable demand for more data and higher bandwidth, the trend toward using more computing power will likely continue for years to come. Analysts working in this space therefore urge organizations to aim for the highest energy efficiency achievable without significantly impacting business. Even so, most companies are reluctant to make wholesale changes given the risk-averse attitude of the industry (data centers still crash even with all the precautions currently in place), and for security and competitive reasons, companies heavily guard their technology along with the locations and statistics of their data centers. [3]

One recommendation by the NRDC is to outsource computing operations using the multi-tenant data center business model. [4] Unfortunately, there are several challenges associated with adopting this model. Multi-tenant data center operators have little incentive to offer an energy-efficient facility to their customers, since their primary objective is to keep costs low while maintaining high levels of security, reliability, and uptime. [6] In the current business model, the extra time and money required to install and monitor equipment for better energy efficiency take lower priority than consolidating space and power capacity to accommodate more customers. [4] Similarly, within customers of multi-tenant data centers, the department that pays the data center power bill is usually separate from the IT department that manages data center operations. Only 20 percent of IT departments manage the power bill, a statistic that has remained stagnant for over five years. [4]

Steps for Improvement

Despite the many challenges, some practical suggestions and opportunities have been proposed to increase participation in and awareness of data center energy efficiency programs. One suggestion is virtualization (consolidating workloads from multiple underutilized servers onto a single physical server), which allows for more efficient capacity utilization and reduces energy costs. Following from the issues discussed in "Why Data Centers are Inefficient," organizations can also retire comatose servers, improve storage utilization through better organization of data, invest in more energy-efficient server technologies, and manage hot/cold air flow in data centers. Similarly, in the spirit of adopting newer technology and energy-efficient equipment, major technology companies such as Apple and Google are using renewable energy sources to reduce carbon emissions and relocating power-hungry data centers near reliable sources of renewable energy. [1]
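A rough estimate of what virtualization can save is sketched below: many lightly loaded physical servers are consolidated onto a few well-utilized hosts. All of the inputs (server counts, power draws, target utilization) are illustrative assumptions rather than measured values.

```python
import math

# Rough estimate of savings from virtualization: consolidating lightly
# loaded physical servers onto a few well-utilized hosts.
num_servers = 20            # lightly loaded physical servers today (assumed)
utilization = 0.15          # average utilization of each server (assumed)
light_load_power_w = 200.0  # assumed power draw of a lightly loaded server
full_load_power_w = 400.0   # assumed power draw of a well-utilized host
target_utilization = 0.75   # target utilization after consolidation

total_work = num_servers * utilization
hosts_needed = math.ceil(total_work / target_utilization)

before_w = num_servers * light_load_power_w
after_w = hosts_needed * full_load_power_w
print(f"{num_servers} servers -> {hosts_needed} hosts after consolidation")
print(f"Estimated power: {before_w:.0f} W before, {after_w:.0f} W after "
      f"({1 - after_w / before_w:.0%} reduction)")
```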

In recent years, the data center industry has adopted a simple metric, power usage effectiveness (PUE), to standardize the measurement of data center efficiency. Because most data centers use almost as much non-compute or "overhead" energy (e.g., cooling and power conversion) as the servers themselves, PUE is defined as the ratio between total facility energy and IT equipment energy. [7] The adoption of PUE provides visibility into the efficiency of data centers and creates an incentive to optimize for this metric.
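As a concrete illustration, the sketch below computes PUE for a hypothetical facility in which overhead energy is nearly as large as the IT load; the figures are illustrative assumptions only. A PUE of 1.0 would mean every kilowatt-hour goes to the IT equipment.

```python
# PUE as defined above: total facility energy divided by IT equipment energy.
it_energy_kwh = 1_000_000  # assumed annual energy of servers, storage, network
overhead_kwh = 800_000     # assumed cooling, power conversion, lighting, etc.

total_facility_kwh = it_energy_kwh + overhead_kwh
pue = total_facility_kwh / it_energy_kwh
print(f"PUE = {pue:.2f}")  # 1.0 is ideal; values near 2.0 indicate high overhead
```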

While there are several ways to decrease energy consumption within an organization's own data center, the larger problem still lies in the misalignment of incentives among data center operators, service providers, and multi-tenant customers. To alleviate this issue, the NRDC further recommends that multi-tenant data center stakeholders develop a "green lease" contract template to incentivize energy savings and publicly disclose data center energy and carbon performance. [4] Of course, there are still many barriers to convincing organizations to do either of these, as mentioned in "Challenges for Implementing Change," but even a 40% reduction in energy use, only half of the reduction the NRDC considers technically possible, would generate $3.8 billion in savings for businesses. [4]

© Lita Yang. The author grants permission to copy, distribute and display this work in unaltered form, with attribution to the author, for noncommercial purposes only. All other rights, including commercial rights, are reserved to the author.

References

[1] B. Walsh, "The Surprisingly Large Energy Footprint of the Digital Economy," Time, 14 Aug 13.

[2] W. Curtis, "Quarter of the World Will Be Using Smartphones in 2016," The Telegraph, 11 Dec 14.

[3] J. Glanz, "Power, Pollution and the Internet," New York Times, 22 Sep 12.

[4] P. Delforge, "America's Data Centers Are Wasting Huge Amounts of Energy," Natural Resources Defense Council, Issue Brief 14-08-A, August 2014.

[5] J. Lee, "Energy Usage of Server Farms," Physics 240, Stanford University, Fall 2012.

[6] G. Cullen et al., "Evaluation, Verification, and Measurement Study - FY 2008/2009 Program for Silicon Valley Power," Summit Blue Consulting, December 2009.

[7] G. A. Brady et al., "A Case Study and Critical Assessment in Calculating Power Usage Effectiveness for a Data Centre," Energ. Convers. Manage. 76 156 (2013).