It wasn’t too long ago that the inside of a data center felt more like a meat locker than a sauna. But that’s changing.
As engineers work to lessen the massive energy demands of data centers — which, the EPA estimates, already gobble up a whopping 3% of all electricity consumed in the U.S. — these facilities are getting warmer. This allows data center operators to save big on cooling costs and energy demands.
The shift is possible because engineers now understand far more about how servers and data centers actually behave under heat, and because the materials and designs of these systems have evolved to tolerate higher temperatures.
For decades, the conventional wisdom held that the air surrounding servers shouldn’t be any hotter than 72°F, and most operators kept the rooms and buildings well below that, safely in the 60s and even 50s.
But in 2008, recognizing that modern data center equipment was much more capable of handling higher temperatures than previously thought, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) raised the recommended temperature of air entering servers and other data center equipment to 80.6°F. Using power to cool the rooms below that recommended limit was simply wasteful, ASHRAE argued.
Despite the recommendation, a 2013 survey of more than 1,000 data centers globally, conducted by the Uptime Institute, indicated that there are still plenty of holdouts. Nearly half (47%) of all data centers reported operating at 71°F to 75°F, a number that has remained relatively steady since the new ASHRAE recommendations were made.
There have, however, been a number of early adopters of this “hotter is better” philosophy, particularly among the biggest and most innovative technology companies, such as Google, Microsoft, IBM, Facebook, and Yahoo. The Uptime survey revealed that the percentage of centers operating at temperatures of more than 75°F increased from 3% to 7% in one year.
There’s a good reason that bigger companies with massive server farm operations are the first to let their temperatures rise. To safely run servers near the ASHRAE limits, “you really need to have the operations expertise on staff to manage a higher risk environment,” Matt Stansberry, Director of Content for the Uptime Institute, explained in the PDF linked above.
In fact, just last year ASHRAE added several new categories to its recommendations for data center operators who maintain strong control over the environment in their centers, and who use networked facilities to manage reliability and guard against data loss during a crash at any one site. For such well-managed and safeguarded facilities, ASHRAE bumped the upper limit to a sweltering 113°F.
While the machines can handle that sort of heat, sometimes the humans that tend these server farms cannot. At a “chillerless” Google server farm in Belgium, it has been reported that temperatures inside occasionally get up to 95°F. The temperate Belgian climate allows Google to forgo the capital expense of installing chillers and artificial cooling. But there are still occasional heat waves, and when the mercury rises to 95°F inside, workers are told to head into the climate-controlled offices, and data traffic is rerouted to alternate facilities.
If you’re not running a chillerless operation like Google’s Belgium plant, there is a temperature at which the energy saved on cooling is erased by the extra energy needed to run the blowers and fans inside the servers to keep them humming safely.
Roger Schmidt, an IBM fellow and its chief engineer for data center energy efficiency, who also led the ASHRAE effort to raise temperature recommendations, told ComputerWorld.com last year:
As the temperature in the inlet into the rack goes up we speed up the blowers to increase the heat transfer, if you will, and to keep that silicon kind of constant. If you start to raise temperature more and more, the blowers and fans speed up more and more, using more power. This is not good. We feel the power increase is minimal for that level, but we did feel that raising it higher than that [the recommended limit] may end up diminishing returns for saving power at the whole data center level.
A white paper published by Dell in 2011 addressed this “sweet spot” directly. Though every data center is different, and energy costs can vary greatly, Dell’s researchers found that “[d]epending upon your IT equipment, the ideal operating temperature is somewhere in the upper 70s or lower 80s (°F).”
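The tradeoff behind that sweet spot can be sketched numerically. The snippet below is a toy model, not anything from Dell’s or ASHRAE’s analysis: cooling power is assumed to fall linearly as the room setpoint rises, while server-fan power follows the well-known fan affinity law (fan power scales with the cube of fan speed). Every coefficient is invented purely for illustration.

```python
# Illustrative only: a toy model of the cooling-vs-fan-power tradeoff.
# All coefficients are hypothetical, chosen just to show the shape of
# the curve; real facilities differ widely.

def total_power_kw(setpoint_f):
    """Rough facility power (kW): cooling power plus server-fan power."""
    # Assumed: each degree F of setpoint increase saves ~2 kW of cooling.
    cooling = max(100.0 - 2.0 * (setpoint_f - 65), 0.0)
    # Assumed: above ~75 F inlet air, fan speed rises 4% per degree F.
    speed = 1.0 + 0.04 * max(setpoint_f - 75, 0)
    # Fan affinity law: fan power scales with the cube of fan speed.
    fans = 10.0 * speed ** 3
    return cooling + fans

# Scan setpoints for the lowest total power -- the "sweet spot".
best = min(range(65, 96), key=total_power_kw)
print(best)  # -> 82 with these toy coefficients
```

With these made-up numbers the minimum lands at 82°F, which happens to sit in the same upper-70s-to-low-80s band Dell identified, though changing either assumed coefficient shifts the sweet spot.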
Which, by fortunate coincidence, echoes the basic ASHRAE recommendations. So if you’re still pumping vast volumes of cold air indiscriminately into your data center to keep it under 70°F, you’re not only wasting energy, but throwing away money.
Ben Jervey covers energy, climate change, and the environment from his home in Vermont.