There’s a new currency in data centers these days, and it has nothing to do with dollars and cents, or even with SPECmarks and gigabits per second.
“The growing global concern with energy supplies and greatly increased consumption from countries such as India and China will shortly make ‘joules’ the world’s most important currency,” says Ron Mann, senior director of data center infrastructure at Hewlett-Packard. “Energy costs are in an inescapable upward spiral, and IT expansion and use continue to grow.”
Joules, of course, are internationally recognized units of energy, and, like power usage effectiveness (PUE) ratings, they are on the minds of many IT administrators these days. That’s because of the insatiable appetite that data centers have for energy, not only to power hardware but also, even more significantly from a consumption standpoint, to keep that gear cool.
Fortunately, along with new challenges, there are new options for controlling energy consumption and costs. Server and power-management hardware options now come with monitors and other tools for onboard intelligence, which can make organizations smarter about how they use energy, thus saving money.
PUE is the main metric for measuring the efficiency of data centers. The rating divides the total electricity coming into a building by the portion of that power actually running IT operations.
In the world of PUE, lower is better: A rating of 1 equates to 100 percent efficiency. Unfortunately, that’s a number many organizations can only dream about. Many IT facilities run at a collective PUE of nearly 2, according to the Uptime Institute; at that level, roughly as much energy goes to cooling and power distribution as to the computing itself.
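The arithmetic behind the rating is simple enough to sketch. A minimal example, with illustrative figures (the 1,900 kW and 1,000 kW loads below are hypothetical, chosen to land near the Uptime Institute’s collective rate):

```python
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    """Power usage effectiveness: total facility power divided by
    the portion of that power running IT equipment. Lower is better;
    1.0 means every watt entering the building reaches the IT gear."""
    if it_load_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_load_kw

# A facility drawing 1,900 kW overall to run a 1,000 kW IT load
print(pue(1900, 1000))  # 1.9
```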
IT managers are facing a number of challenges in their quest to improve energy efficiency and achieve lower PUE ratings. Changes going on within the facilities are forcing organizations to address some evolving power and cooling realities.
Prior to widespread server virtualization and the quest to consolidate hardware, data center planners often allocated 6 kilowatts or less of power for each server rack. But now that IT managers are eking out higher performance in tighter physical spaces, racks crammed with blade servers may need 20 kilowatts of electricity. This increases power demands and creates ripple effects.
“When you add more servers to the IT environment, you are going to create hot spots that traditional types of cooling methods aren’t going to be able to cool adequately,” says Steve Carlini, global director of data center solution marketing at Schneider Electric, a vendor of data center power and cooling products.
As a result, many IT managers are monitoring their power-consumption habits more closely than ever, and technologies are catching up to help with this high level of scrutiny.
Uninterruptible power supplies (UPSs) now routinely include usage monitors that track energy draws. And power distribution units (PDUs) come with gauges that show how much power is being directed to individual server racks.
Such comprehensive data give IT administrators a clearer picture of their energy requirements, enabling them to work with their facilities departments and utility companies to get the power they need to keep operations running continuously. Drilling further into the information, IT managers can also spot potential problem areas, such as a server rack that is approaching a supply threshold and risking costly downtime.
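That kind of threshold check is easy to automate once PDU readings are in hand. A minimal sketch, assuming hypothetical rack names, draws and capacities, and an arbitrary 80 percent alert threshold:

```python
def racks_near_threshold(rack_draw_kw, rack_capacity_kw, threshold=0.8):
    """Flag racks drawing more than `threshold` of their supply capacity,
    using per-rack readings of the kind PDUs and UPS monitors report."""
    return [name for name, draw in rack_draw_kw.items()
            if draw / rack_capacity_kw[name] > threshold]

draws = {"rack-a": 4.5, "rack-b": 18.0, "rack-c": 11.0}  # kW, hypothetical
caps  = {"rack-a": 6.0, "rack-b": 20.0, "rack-c": 20.0}  # kW, hypothetical
print(racks_near_threshold(draws, caps))  # ['rack-b']
```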
There’s another big advantage to being diligent about monitoring this data: IT organizations may be paying for “captive power,” excess resources that are unnecessarily going to a server rack or other resource that in reality can easily make do with less.
For example, as a consequence of overengineering, a collection of blades may be fed by four 3-kilowatt power lines but, according to the monitors, it never draws more than 6 kilowatts. IT chiefs may decide to move one or more of the lines to a different rack that is in danger of maxing out its power supply. All of the servers would then get the power they need without forcing the organization to contract for additional capacity.
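The captive-power calculation from that example is just provisioned capacity minus observed peak draw:

```python
def captive_power_kw(lines_kw, peak_draw_kw):
    """Capacity an organization pays for but never uses:
    the sum of the supply lines feeding a rack, minus the
    highest draw the monitors have ever recorded there."""
    return sum(lines_kw) - peak_draw_kw

# Four 3 kW lines feeding blades that never top 6 kW:
# 6 kW of captive power that could serve another rack.
print(captive_power_kw([3, 3, 3, 3], 6))  # 6
```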
The latest server designs are also contributing to better power management in data centers, and they can boost an overall power-saving strategy if IT managers are diligent about hardware refresh cycles.
“Every year’s difference in the age of a server represents about a seven-time decrease in performance per watt,” says Jack Pouchet, director of energy initiatives at Emerson Network Power, a power and cooling equipment company.
“So a server that’s one year old has one-seventh the productivity of a new server,” he says. “Our research shows that 85 to 90 percent of power and cooling is going to devices doing up to 5 percent of all computational work. What that says is, you can shut off those units and let the latest and most efficient equipment handle the workloads.”
New servers also include features such as lights-out management, or LOM, which provides tools embedded in the hardware to monitor the energy draw of servers while they’re running, or even after they’ve been powered down during off hours. Look for gear that supports the Intelligent Platform Management Interface (IPMI) specification, which, among other insights, can tell IT managers how much power a server is drawing at any given time.
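Reading those sensors typically means parsing tabular output from a management tool. A minimal sketch that picks wattage readings out of pipe-delimited sensor rows; the sample lines below are hypothetical, loosely modeled on the name | value | unit layout that IPMI utilities such as ipmitool print:

```python
def parse_power_sensors(sensor_output: str) -> dict:
    """Extract power readings (watts) from pipe-delimited sensor rows
    of the form: sensor name | value | unit | status."""
    readings = {}
    for line in sensor_output.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 3 and fields[2].lower() == "watts":
            readings[fields[0]] = float(fields[1])
    return readings

# Hypothetical sensor rows, including a non-power reading to skip
sample = """\
PSU1 Power | 220.000 | Watts | ok
PSU2 Power | 180.000 | Watts | ok
Inlet Temp | 24.000 | degrees C | ok"""
print(parse_power_sensors(sample))  # {'PSU1 Power': 220.0, 'PSU2 Power': 180.0}
```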
“Much of the measurement capability that you need is already included in state-of-the-art servers,” Mann points out. “All you have to do is read the sensors.”
The venerable UPS is also becoming more sophisticated in the age of greater energy awareness. “Current UPS generations use multistate inverters and are much more efficient, especially at lower load levels, than they used to be,” Carlini says.
In addition, the inverters give IT managers more control over UPSs that provide double-conversion technology — the ability to convert incoming AC power into filtered DC power and then turn it back into AC power. Power conversion is an important way to ensure that high-end data center equipment receives high-quality power.
“But if you really want to save on your electric bill, you could put the UPS in economy mode and get a couple more percent of efficiency out of it,” Carlini adds. Economy mode suspends the conversion process during times when power quality levels are acceptable. While running in this mode, the UPSs still provide their battery backup capabilities and, after a quick reset, will start filtering the power again, if necessary.
Modularity has also come to UPS design in a nod to right-sizing the units and increasing their reliability. For example, vendors offer modules from 2 kilowatts to 25 kilowatts that IT managers can plug into a UPS chassis for additional capacity as needs grow. (Additional modules going up to 200 kilowatts are available but are typically installed by a specialist.) The modules ensure that organizations have the capacity they need but are not powering excess resources. They also provide redundancy and a fast way to replace components if they break down.
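The right-sizing arithmetic behind modular UPS design can be sketched directly. The 45 kW load and 10 kW module size below are hypothetical, though 10 kW falls within the 2-to-25-kilowatt range vendors offer; the extra module supplies the N+1 redundancy the design promises:

```python
import math

def modules_needed(load_kw: float, module_kw: float, redundant: int = 1) -> int:
    """Smallest number of UPS modules that covers the load,
    plus spares for redundancy (N+1 by default)."""
    return math.ceil(load_kw / module_kw) + redundant

# A 45 kW load on 10 kW modules: five to carry it, one spare
print(modules_needed(45, 10))  # 6
```

As the load grows, the IT staff plugs in more modules rather than replacing the whole unit, which is the right-sizing benefit the vendors are selling.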
Data center managers have another option for getting the right amount of power to each server rack. Modular busway power distribution systems include input connectors for taking in power from UPSs or PDUs. A series of bus plugs then distributes power throughout the rack.
The systems reduce the amount of cabling required to bring power to rack units and make it easier for IT staff to add new equipment without the help of electricians. Busway power distribution can cut installation time and costs by up to 30 percent compared with traditional cable and conduit solutions, according to some industry estimates.