
Drexel University's Paul Keenan and Kenneth Blackney relocated the university's backup data center in part to improve efficiencies and accommodate future growth.

Paul S. Howell

Drexel University's Data Center Relocation Yields Increased Power and Efficiency

Drexel's move lays the foundation for future investments in IT.

Posted October 8, 2013  |  Appears in the Fall 2013 issue of EdTech Magazine.

Squeezed into the sub-basement of a residence hall, Drexel University's old secondary data center was accessible only down a long set of outside stairs. On a blistering day during one of the heat waves that rolled over the Philadelphia campus last summer, faltering air conditioners allowed data center temperatures to climb above 96 degrees.

"That was just the ambient air temperature — who knows how hot the insides of the servers were," says Kenneth Blackney, Drexel's associate vice president for Core Technology Infrastructure.

That is changing this fall as the backup data center moves into space within the new URBN Center — also home of the Westphal College of Media Arts and Design — a donated building that has undergone extensive renovations, including retrofitting and improvements to meet green building standards.

At roughly 600 square feet, the new backup data center is about twice the size of the old facility, although a direct comparison is difficult because of the irregular shape of the old space, says Paul Keenan, Drexel's executive director for systems and security. Along with increased space and energy efficiency, the new data center has access to more than double the power (up to 100kVA) available at the old location. The old facility could accommodate six server racks, but the room lacked the power and cooling capacity to fully populate them, Keenan says.

"This is significantly more capacity than we've had," says Keenan. "We can make more things redundant, and it gives us more location independence in terms of keeping things up and running."

Data Center Moving Day

IT's challenge was to get a roomful of equipment transported and installed with as little downtime as possible because the servers and storage units are always in the process of performing critical backup functions, Keenan says.

"There's a decreased level of redundancy while equipment is in motion," he says. "The aim is to make sure the move is just a matter of putting the equipment in place, connecting it and testing it."

IT also wanted the migration to be a fresh start, to clear away the cable clutter that creeps into any technology environment, Keenan says. Staff spent much of the summer cabling and configuring the new space to meet current and projected needs, installing generators and a new APC power and cooling system.

As Drexel planned the relocation, IT administrators engaged with multiple companies for products, services and advice. CDW•G helped the university develop the crucial power and cooling piece of the project, Keenan says.

"We ended up with a holistic solution that is as efficient and redundant as it can possibly be," he says.

Shiny and New

The state-of-the-art APC system includes a 100kVA uninterruptible power supply that protects against outages. Seven in-row coolers configured for N+1 redundancy safeguard against overheating even if one unit fails.

Hot-aisle containment further maximizes efficiency by channeling heat toward AC units. The whole system can be monitored and managed through a virtual appliance.
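The N+1 arrangement described above can be sanity-checked with a quick calculation: with one cooler out of service, the remaining units must still carry the room's full heat load. The per-unit capacity and heat-load figures below are assumptions for illustration, since the article reports only the unit count and the 100kVA power feed.

```python
# Hypothetical figures for illustration; the article gives the cooler
# count (seven, N+1) but not per-unit capacities or the actual heat load.
COOLERS = 7                 # installed in-row cooling units
UNIT_CAPACITY_KW = 20.0     # assumed cooling capacity per unit (kW)
IT_LOAD_KW = 100.0          # assumed peak heat load from IT equipment (kW)

def survives_single_failure(units: int, per_unit_kw: float, load_kw: float) -> bool:
    """N+1 check: can the remaining units carry the full load if one fails?"""
    return (units - 1) * per_unit_kw >= load_kw

# 6 remaining coolers x 20 kW = 120 kW, enough for a 100 kW load
print(survives_single_failure(COOLERS, UNIT_CAPACITY_KW, IT_LOAD_KW))
```

Under these assumed numbers the design tolerates any single cooler failure; the same check with five units would fail, which is why the seventh unit is the "+1."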

The backup data center contains 45 physical servers, primarily from IBM. Their purposes vary, but many are part of an active, load-balanced server environment that provides full service if there is an outage in the main data center, Keenan says. Those include Exchange 2010 servers; mail gateway servers performing anti-spam, anti-virus and delivery functions; Lightweight Directory Access Protocol authentication servers; central authentication servers; wireless RADIUS services; SQL Server Cluster nodes; Domain Name System and Dynamic Host Configuration Protocol; and web app servers.
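The active, load-balanced arrangement Keenan describes can be sketched in miniature: every healthy site takes a share of the traffic, and an outage at one site simply concentrates requests on the other. The site names and health flags here are hypothetical, not Drexel's actual configuration.

```python
# Minimal sketch of an active/active, load-balanced pair of data centers.
# Both sites serve traffic; requests skip any site whose health check fails.
from itertools import cycle

SITES = {
    "main-dc": True,      # assumed healthy
    "backup-dc": True,    # assumed healthy
}

def healthy_sites():
    """Return the names of all sites currently passing their health check."""
    return [name for name, up in SITES.items() if up]

def route_requests(n):
    """Round-robin n requests across all healthy sites."""
    pool = cycle(healthy_sites())
    return [next(pool) for _ in range(n)]

print(route_requests(4))       # traffic alternates between both sites
SITES["main-dc"] = False       # simulate an outage at the main data center
print(route_requests(4))       # all traffic now lands on the backup site
```

The point of the active design is visible in the last line: failover is not a switch that gets thrown, but the natural result of routing only to sites that answer.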

An HP 3PAR F400 Storage System with 350 terabytes of capacity mirrors production data and may be the heart of the data center. Lifting the storage array out of the basement space and reinstalling it in its new home was the single biggest concern in the migration, Keenan says. Drexel contracted with HP to handle transport and some other aspects of the move.

The relocation investment was a critical step toward upgrading the Drexel computing infrastructure, which is increasingly integral to the work of the university, Blackney says.

"Now we have real redundancy — 70 percent of the technology in the room is dedicated to mirroring or failover," he says. "We can also use the space for more capabilities and to give us more flexibility as we make other improvements."

Incredible Shrinking Data Centers

Drexel University's new backup facility at the URBN Center is likely the last the university will ever build, says Kenneth Blackney, associate vice president for Core Technology Infrastructure.

"We know that all kinds of systems and applications are leaving the data center," he says. "Trends suggest that data centers are getting smaller. We may rearrange and update space we have, but we probably won't need more space."

As institutions make such decisions, they should balance growing needs with the potential of the technologies in those shrinking data centers, says David Cappuccio, research vice president at Gartner. Clouds are part of the equation: The number of traditionally onsite data center functions now delivered as hosted services only continues to increase.

Virtualization and new hardware designs that decrease footprints while increasing density also should figure into any plans.

"Organizations with 1,000 square feet of space do a linear calculation and decide they need 3,000 square feet and power and cooling to match. They don't take into account emerging technologies that increase density and efficiency. They overbuy and overbuild because it's easy and less risky," Cappuccio says.
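Cappuccio's point can be made concrete with a back-of-the-envelope comparison. All figures below are illustrative: the 3x workload growth mirrors his 1,000-to-3,000-square-foot example, and the density gain is an assumed factor for virtualization and denser hardware.

```python
# Contrast a naive linear space projection with one that accounts for
# rising rack density. All numbers are illustrative assumptions.
current_sqft = 1000
workload_growth = 3.0    # planning for 3x the workload
density_gain = 2.5       # assumed: denser hardware fits 2.5x more work per sq ft

linear_plan = current_sqft * workload_growth
density_aware_plan = current_sqft * workload_growth / density_gain

print(f"linear projection:   {linear_plan:.0f} sq ft")
print(f"density-aware plan:  {density_aware_plan:.0f} sq ft")
```

Under these assumptions, the linear projection calls for three times the space while the density-aware plan calls for only a modest expansion, which is the overbuilding gap Cappuccio describes.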

Be flexible: Adopt technologies and design spaces that can shrink or grow as computing needs change.
