As students and administrators seek anytime, anywhere access to the cloud, higher ed IT teams must face their fears and get to work.
For more than 40 years, the IT staff at the Georgia Institute of Technology has worked to keep its data center as current as possible. Over time, it has expanded the center to two rooms of about 5,000 square feet each and has upgraded its equipment to keep pace with administrative and research requirements.
About five years ago, one of the rooms was dedicated to high-performance servers the university needed. Those servers drew far more power and gave off far more heat than the institution’s other servers, says Paul Manno, an HPC systems architect at Georgia Tech; it was more heat than the standard computer room air conditioner units could handle.
“As we were looking to stand up the high-performance environment, we realized that we could easily reach 35 kilowatts per rack, and we needed a better way to cool that power density,” Manno explains. “Our raised floor is only about 14 inches, and there is only so much air you can get through that space.”
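Manno’s point about airflow can be made concrete with the standard sensible-heat rule of thumb: airflow in cubic feet per minute is roughly 3,412 × kW ÷ (1.08 × ΔT°F). The sketch below runs that arithmetic for a 35-kilowatt rack (the temperature rise and tile figures are illustrative assumptions, not from the article):

```python
# Rule-of-thumb airflow estimate for server cooling (illustrative only).
# Sensible heat: Q[BTU/hr] = 1.08 * CFM * delta_T[degF], and 1 kW = 3,412 BTU/hr,
# so CFM = 3,412 * kW / (1.08 * delta_T).

def required_cfm(heat_load_kw: float, delta_t_f: float = 20.0) -> float:
    """Airflow (cubic feet per minute) needed to carry away a given heat load."""
    return 3412.0 * heat_load_kw / (1.08 * delta_t_f)

rack_kw = 35.0  # the high-density rack figure Manno cites
cfm = required_cfm(rack_kw)
print(f"{rack_kw} kW rack needs roughly {cfm:,.0f} CFM at a 20 degF rise")
# A typical perforated floor tile delivers only a few hundred CFM (assumed figure),
# which is why a shallow raised floor alone struggles at this density.
```

At a 20-degree rise, a 35-kilowatt rack needs more than 5,000 CFM of cool air, an order of magnitude more than a perforated tile typically supplies, which is the gap that in-row cooling and containment close.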
Georgia Tech addressed this concern by using a combination of hot- and cold-aisle containment systems, which keep hot air exhausted from the servers separate from cool air that is fed into server intakes, maximizing cooling efficiency. To create hot aisles, the data center uses products such as APC’s InRow coolers between the racks, which can be linked together to form a hot aisle. For cold-aisle containment, Georgia Tech uses APC’s Rack Air Containment, along with Rittal’s cold-aisle containment system.
Many organizations are moving to hot-aisle/cold-aisle containment to improve efficiency and reduce costs.
“It’s pretty much a given for new build-outs because higher density equipment is so common today,” says Jason Schafer, research manager at Tier1 Research of Bethesda, Md. “Even five years ago, 1 to 2kW per rack was average, but now it can be 10kW per rack or more. That makes hot- and cold-aisle containment pretty important.”
It can be difficult to retrofit existing data centers with hot- or cold-aisle containment, although it’s not impossible. In many cases, it’s worth the effort, Schafer says.
For Georgia Tech, the switch to hot-aisle/cold-aisle containment has paid off.
“It has allowed us to have the flexibility to add high-performance, high-density equipment to the room and still be able to cool without affecting the temperature of the room,” Manno says. “We don’t have hot spots anymore.”
As the university develops plans for a new data center, Manno says that hot-aisle containment — and maybe cold-aisle containment — will be part of the plans.
For Bryant University, hot-aisle containment is the end game — something to aspire to as the Smithfield, R.I., university expands.
The first step was consolidating three scattered data rooms into one compact, 500-square-foot data center on campus. That step, completed about five years ago, involved standardizing on IBM BladeCenter servers and virtualizing them with VMware. Although down to just 500 square feet, the data center now has about 300 times the capacity because of higher-density computing, says Richard Siedzik, the university’s director of computer and telecommunications services.
When Siedzik’s team built the existing data center, it installed several APC InRow water-fed cooling units, and has added several more since. Today, it has nine cooling stations that essentially create a hot-aisle/cold-aisle configuration.
As the university grows, Siedzik expects to enlarge the existing data center.
“Some of the space isn’t built out as a data center yet, but as we expand the data center physically, containment might become a necessity,” he says. “It’s a logical next step.”
Without some type of containment system, hot exhaust air inevitably mixes back into the room, reducing the efficiency of data center cooling efforts. The idea of containment, whether hot or cold, is to prevent air from recirculating into equipment intakes before it has been cooled, Schafer explains.
With hot-aisle containment, racks are arranged in rows with the backs of the servers facing each other. Exhaust from the servers is emitted into the hot aisle and routed back to the computer room air conditioner (CRAC) unit. This way, hot air emitted from the servers is captured and prevented from entering the rest of the data center.
In a cold-aisle containment configuration, racks are arranged front to front, and the area between the rows is contained. That way, supply and return air are fully separated. Cold air reaches the cold aisles through a raised floor, and the hot air exhausted from the servers is routed from the racks back into the CRAC unit.
Each approach has its pros and cons, but combining them provides the best of both worlds, Schafer says.
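The recirculation problem containment solves can be sketched with a toy steady-state mixing model (the supply temperature, server temperature rise, and recirculation fractions below are hypothetical, chosen only to illustrate the effect):

```python
# Toy mixing model: a fraction `recirc` of hot server exhaust leaks back
# into the rack intakes instead of returning to the CRAC unit.

def intake_temp_f(supply_f: float, server_rise_f: float, recirc: float) -> float:
    """Steady-state intake temperature when exhaust recirculates into the intake.

    intake = (1 - recirc) * supply + recirc * exhaust, with
    exhaust = intake + server_rise; solving for intake gives the closed form.
    """
    return supply_f + recirc * server_rise_f / (1.0 - recirc)

supply, rise = 65.0, 25.0  # assumed degF values for illustration
for r in (0.0, 0.10, 0.25):  # 0% recirculation = perfect containment
    print(f"recirculation {r:>4.0%}: intake {intake_temp_f(supply, rise, r):.1f} degF")
```

Even modest recirculation pushes intake temperatures up by several degrees, forcing the CRAC units to supply colder air to compensate; containment drives that recirculation fraction toward zero, which is the efficiency gain Schafer describes.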