Aug 31 2009

4 Ways to Go Green in Your Data Center

Here are four hot ideas for keeping your systems cool and your power demand low.

Going green in the data center is easier said than done. But a few small process tweaks here and there can translate into big dividends by reducing overall power consumption.

[1] Raise the temperature.

Evidence suggests that hardware runs more reliably when it's kept cool. But making a data center too cold can result in a big energy bill.

Working closely with IT equipment manufacturers, the American Society of Heating, Refrigerating and Air-Conditioning Engineers determined that data-center equipment can withstand higher temperatures and wider humidity ranges than previously thought. Five years ago, ASHRAE recommended an environmental range of between 68 and 77 degrees Fahrenheit, with relative humidity between 40 and 55 percent. In 2008, the organization widened the recommended temperature range to between 64 and 81 degrees and the relative humidity range to between 35 and 60 percent.
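
For a rough sense of how those ranges translate into monitoring logic, here is a minimal sketch in Python that checks a sensor reading against the 2008 recommended envelope. The rack names and readings are hypothetical.

```python
# Illustrative sketch: check rack-inlet readings against ASHRAE's 2008
# recommended envelope (64-81 degrees F, 35-60 percent relative humidity).
# The rack names and reading values below are hypothetical examples.

RECOMMENDED_TEMP_F = (64.0, 81.0)
RECOMMENDED_RH_PCT = (35.0, 60.0)

def in_recommended_envelope(temp_f: float, rh_pct: float) -> bool:
    """Return True if a reading sits inside both recommended ranges."""
    return (RECOMMENDED_TEMP_F[0] <= temp_f <= RECOMMENDED_TEMP_F[1]
            and RECOMMENDED_RH_PCT[0] <= rh_pct <= RECOMMENDED_RH_PCT[1])

# Example readings: (rack, temperature in F, relative humidity in percent)
readings = [("rack-a1", 75.2, 48.0), ("rack-b3", 83.1, 62.5)]

for rack, temp_f, rh in readings:
    status = "OK" if in_recommended_envelope(temp_f, rh) else "OUT OF RANGE"
    print(f"{rack}: {temp_f} F, {rh}% RH -> {status}")
```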

Most data centers operate between 65 and 70 degrees Fahrenheit, while some run as low as 60 degrees to guard against emergencies such as failure of the cooling systems. The strategy is to make the data center as warm as possible without putting equipment at risk of overheating, says Bill Kosik, Hewlett-Packard's energy and sustainability director.

[2] Improve design.

A few design changes can help improve the airflow, thereby reducing your cooling costs. One option is rearranging the perforated floor tiles to implement a hot-aisle/cold-aisle configuration. You can also install energy-efficient lighting and retrofit cooling systems with variable speed motors so they generate less heat and consume less power.

Using contained cabinets that take air from the floor and vent it directly out the top will dramatically improve airflow, Kosik says.

[3] Manage power remotely.

IT organizations can install equipment and sensors to measure everything, from the amount of energy that servers, storage, networking and cooling equipment use to the temperature and humidity in front of server racks and in every corner of the data center. That provides the baseline data IT administrators need to determine how to make their data centers more efficient, which in turn helps save energy and money.

“Without measuring, you have no basis for trying to optimize the data center,” says Herman Chan, manager of Raritan's power management business unit. “If you don't measure, how do you know if you are overcooling or if you have hot spots in certain rows? How do you know if you're just running 10 percent of the nameplate power in any one rack or about to trip a circuit breaker because you are consuming 80 to 90 percent of the load?”
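
To make that concrete, here is a minimal sketch in Python that compares measured rack draw to nameplate capacity and flags the conditions Chan describes. The rack IDs and wattages are made-up examples; only the 10 percent and 80 percent thresholds come from his figures.

```python
# Illustrative sketch: flag racks running far below, or dangerously close to,
# their nameplate power. Rack IDs and wattages are made-up example data;
# the 10 and 80 percent thresholds echo the figures Chan cites.

racks = {
    # rack_id: (measured_watts, nameplate_watts)
    "row1-rack01": (550, 5000),
    "row1-rack02": (4300, 5000),
    "row2-rack07": (400, 5000),
}

for rack_id, (measured_w, nameplate_w) in racks.items():
    utilization = measured_w / nameplate_w * 100
    if utilization < 10:
        note = "under-utilized; candidate for consolidation"
    elif utilization >= 80:
        note = "high load; risk of tripping a circuit breaker"
    else:
        note = "within normal range"
    print(f"{rack_id}: {utilization:.0f}% of nameplate ({note})")
```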

[4] Upgrade power supply systems.

Most new uninterruptible power supplies maintain at least 97 percent efficiency, which means only 3 percent of incoming power is lost as heat. Older UPS systems operate at 70 to 80 percent efficiency, wasting 20 to 30 percent of the power they draw.
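
A back-of-the-envelope sketch in Python shows what those efficiency figures mean in absolute terms, assuming a hypothetical 500kW IT load.

```python
# Rough illustration of UPS losses at the efficiencies cited above,
# assuming a hypothetical 500 kW IT load. Efficiency is output divided
# by input, so input = load / efficiency and the difference becomes heat.

LOAD_KW = 500.0  # illustrative IT load

for label, efficiency in [("modern UPS (97%)", 0.97),
                          ("older UPS (80%)", 0.80),
                          ("older UPS (70%)", 0.70)]:
    input_kw = LOAD_KW / efficiency
    loss_kw = input_kw - LOAD_KW
    print(f"{label}: draws {input_kw:.0f} kW, loses {loss_kw:.0f} kW as heat")
```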

Buying new modular UPS systems also can save energy. A 500kW system, for example, can be made up of twenty 25kW power modules. Traditionally, IT departments used two large UPS systems side by side for redundancy, with each UPS operating at 50 percent load (or less, if the data center anticipated growth). Because a UPS is least efficient at light loads, a modular system lets the data center energize only the modules its current load requires, keeping each one closer to its efficient operating range and adding capacity as demand grows.

Say Goodnight

Energy savings from shutting down machines at night could net $15 to $20 per computer annually, according to power management software maker Avocent.
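
As a rough sanity check on that figure, the sketch below runs the arithmetic with illustrative assumptions (a 40-watt idle draw, roughly 4,000 powered-off hours a year and 10 cents per kilowatt-hour); none of these inputs come from Avocent.

```python
# Back-of-the-envelope estimate of nightly-shutdown savings per PC.
# Every input below is an illustrative assumption, not an Avocent figure.

idle_draw_watts = 40        # assumed draw of a PC left on but idle
off_hours_per_year = 4000   # assumed nights plus weekends powered off
rate_per_kwh = 0.10         # assumed electricity price in dollars

kwh_saved = idle_draw_watts * off_hours_per_year / 1000
annual_savings = kwh_saved * rate_per_kwh
print(f"About {kwh_saved:.0f} kWh and ${annual_savings:.2f} saved per PC per year")
```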
