Jun 11 2009
Data Center

Get Physical

Make sure your data center's infrastructure helps rather than hinders your agency's ability to meet service delivery demands.

A data center’s physical infrastructure is the foundation for its success or failure. A solid physical infrastructure provides an effective base on which to build services, while deficiencies will plague even the most diligent data center operators.

Where I work, we recently completed a technology refresh of the data center. The upgrade has let us move more effectively into contemporary technologies, such as blade computing and large-scale virtualization. Physical infrastructure is now far less of a barrier than it was before, and the facility is in a much better position to provide the flexible, cost-effective services that are in demand.

Here are some lessons learned that can help any data center improve its services:

TIP 1: Provide for adequate floor space.

There are multiple factors to consider when it comes to physical space. First, adequate floor space for current and anticipated future demand is essential. To create this estimate, project out your growth using average-case and worst-case figures and identify a target that fits within the maximum space, power and cooling constraints available without building a new facility.
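To make that projection concrete, here is a minimal sketch that compounds average-case and worst-case annual growth rates against fixed facility ceilings and reports the first year each ceiling would be exceeded. The starting loads, growth rates and ceilings are hypothetical placeholders, not figures from our project.

    # Rough capacity projection: compound annual growth checked against fixed
    # facility ceilings. Every figure here is a hypothetical placeholder.
    CEILINGS = {"floor_sqft": 5000, "power_kw": 400, "cooling_tons": 120}
    CURRENT = {"floor_sqft": 2200, "power_kw": 150, "cooling_tons": 45}
    SCENARIOS = {"average-case": 0.08, "worst-case": 0.18}  # annual growth rates

    def first_year_exceeded(start, annual_growth, ceiling, horizon=10):
        """Return the first year a compounding load exceeds its ceiling, or None."""
        value = start
        for year in range(1, horizon + 1):
            value *= 1 + annual_growth
            if value > ceiling:
                return year
        return None

    for name, growth in SCENARIOS.items():
        print(f"--- {name} ({growth:.0%} annual growth) ---")
        for metric, ceiling in CEILINGS.items():
            year = first_year_exceeded(CURRENT[metric], growth, ceiling)
            verdict = f"exceeds ceiling in year {year}" if year else "fits the 10-year horizon"
            print(f"  {metric}: {verdict}")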

Once you determine how much floor space will be needed, take a look at the space both above and beneath the floor. Often when renovating an existing data center space, you start with a raised floor that might provide as much as 24 inches of space to work with. Unfortunately, at my center, this space was used for both power and data distribution, without much thought given to routing or airflow. The result? The plenum was not distributing air effectively, which sparked heat problems even though we had a large surplus of cooling available.

To clean up the mess under the floor and also prevent this cooling dilemma from recurring, we moved the data cabling overhead. In many instances, doing so will necessitate raising a suspended ceiling by about 2 feet to maintain the minimum 8.5 feet of clearance below ceiling obstructions required by the Telecommunications Industry Association’s TIA-942 data center standard. Installing a ladder rack and fiber duct below the suspended ceiling creates a visually striking cable plant, one that engineers will be motivated to keep clean and orderly because it’s in plain view throughout the data center.

TIP 2: Be smart about cabling under the floor.

If you keep the cabling under the floor, you will want it to run mostly perpendicular to the airflow and completely away from the cold aisles so that it won’t interfere with airflow. Because hot aisles do not have any bearing on airflow under the floor, they are a good location for cabling. Although we found that we did not need to install ducts under the floor, doing so would further increase cooling efficiency.

Also following TIA-942, you can use a tiered distribution concept for your equipment. Rather than cabling all data center components directly to core network equipment, you can establish a distribution area in each aisle and cable individual racks to this distribution area. In TIA parlance, this is referred to as a horizontal distribution area and can be thought of as a “wiring closet” for each aisle. This allows changes to be made between the server and the horizontal distribution area without disturbing unrelated cabling. Minimizing the scope of change minimizes the potential impact as well.
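As a toy illustration of why the tiered layout limits the impact of a change, the sketch below models racks cabling to their aisle’s horizontal distribution area (HDA) and the HDAs cabling back to a main distribution area (MDA); the rack and aisle names are invented for the example.

    # Toy model of tiered distribution: each rack cables only to its aisle's
    # horizontal distribution area (HDA); only the HDAs cable back to the main
    # distribution area (MDA). Rack and aisle names are invented.
    RACK_TO_HDA = {
        "rack-A1": "HDA-aisle-A", "rack-A2": "HDA-aisle-A",
        "rack-B1": "HDA-aisle-B", "rack-B2": "HDA-aisle-B",
    }
    HDA_TO_MDA = {"HDA-aisle-A": "MDA", "HDA-aisle-B": "MDA"}

    def cabling_touched_by_change(rack):
        """A move/add/change on one server repatches only its rack-to-HDA link;
        the HDA-to-MDA trunk and every other aisle stay undisturbed."""
        hda = RACK_TO_HDA[rack]
        return {"repatch": (rack, hda), "undisturbed_trunk": (hda, HDA_TO_MDA[hda])}

    print(cabling_touched_by_change("rack-A2"))
    # {'repatch': ('rack-A2', 'HDA-aisle-A'), 'undisturbed_trunk': ('HDA-aisle-A', 'MDA')}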

Alongside the cable distribution network (and using the same cable distribution channels), you might want to consider installing a common bonding network tied to your building’s grounding system. This will give each individual enclosure added to the data center environment easy access to a grounding point so that the enclosure can be grounded properly.

TIP 3: Calculate your cooling requirements over five to 10 years.

An interesting note about modern computing equipment: For planning purposes, at least, it is almost completely efficient at converting electrical power to heat. As a result, you can generally assume that every watt of electrical energy put into a data center will need to be extracted in the form of heat energy. Because 1 kilowatt of electrical load is roughly equivalent to 3,413 British thermal units per hour, or 0.28 tons of cooling capacity, you can use anticipated electrical loads to drive calculations for your cooling load.
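Here is that conversion as a short worked example, using the rule-of-thumb factors above (3,413 BTUs per hour per kilowatt of load and 12,000 BTUs per hour per ton of cooling); the 50kW figure matches the facility described below.

    # Convert an anticipated electrical load into a required cooling load,
    # assuming every watt delivered to the equipment comes back out as heat.
    BTU_PER_HR_PER_KW = 3413    # rule of thumb: 1kW of load ~= 3,413 BTUs/hr
    BTU_PER_HR_PER_TON = 12000  # 1 ton of cooling removes 12,000 BTUs/hr

    def cooling_required(load_kw):
        """Return (BTUs per hour, tons of cooling) for a given IT load in kW."""
        btu_per_hr = load_kw * BTU_PER_HR_PER_KW
        return btu_per_hr, btu_per_hr / BTU_PER_HR_PER_TON

    btu, tons = cooling_required(50)  # a 50kW data center, as described below
    print(f"50kW load -> {btu:,.0f} BTUs/hr -> {tons:.1f} tons of cooling")
    # 50kW load -> 170,650 BTUs/hr -> 14.2 tons of cooling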

80%: Data centers built before the dot-com era that are technically obsolete

SOURCE: Gartner

Of course, doing so assumes that you’re using cooling effectively. Our data center provided an excellent counterpoint to this. We started our renovation project with about 60 tons of cooling (roughly 720,000 BTUs per hour) available to support a 50kW data center. That capacity should have been sufficient to support more than 200kW, but because of poor distribution we still experienced heat-related failures in several areas.

There are remedies: First, consider replacing the air-conditioning system with one that better matches your cooling requirements. Cooling is an area where the costs of oversizing can add up quickly. To avoid overspending, we scaled the system to match our five-to-10-year projections and left room in our data center strategy for additional growth. We also calculated our forced-air cooling need based on supporting power densities of up to 8kW per enclosure.

Because almost all of our enclosures had power densities less than this, we matched our forced-air cooling to fit the most common need and plan to install additional capabilities as point solutions where required. For enclosures with higher heat densities, we plan to use a ducted in-row cooling system. By better matching your forced-air cooling strategy to your true needs, you can gain energy savings. Cooling targets, per TIA-942, are 68 to 72 degrees Fahrenheit with a relative humidity of 40 percent to 55 percent.
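To see how a per-enclosure power density translates into forced-air requirements, here is a back-of-the-envelope sketch using the common sensible-heat approximation CFM = BTUs per hour divided by (1.08 times the temperature rise in degrees Fahrenheit); the 20-degree air temperature rise across the enclosure is an assumed planning figure, not one from our project.

    # Back-of-the-envelope airflow per enclosure, using the common
    # sensible-heat approximation CFM = BTUs/hr / (1.08 * delta_T_F).
    # The 20-degree F temperature rise across the enclosure is an assumption.
    BTU_PER_HR_PER_KW = 3413

    def required_cfm(enclosure_kw, delta_t_f=20):
        """Approximate cold-air volume (cubic feet per minute) per enclosure."""
        return enclosure_kw * BTU_PER_HR_PER_KW / (1.08 * delta_t_f)

    for kw in (2, 4, 8):  # typical densities vs. the 8kW planning ceiling
        print(f"{kw}kW enclosure -> ~{required_cfm(kw):,.0f} CFM")
    # 2kW -> ~316 CFM, 4kW -> ~632 CFM, 8kW -> ~1,264 CFM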

It’s also wise to consider changing the room orientation to match the hot aisle/cold aisle configuration recommended by TIA-942. This arrangement makes best use of cold air by positioning racks so that aisles in the data center alternate between cold (inlet) and hot (exhaust). By arranging racks and using filler panels, a center limits the mixing of hot and cold air and therefore can make the most of the cool air that it generates. Through simple rack orientation shifts and clearing the underfloor plenum, you may be able to reduce inlet and exhaust temperatures by double-digit degrees with no change to the A/C set points.

TIP 4: Protect the data center from fire.

Take precautions that will help your center survive a fire. We implemented a dual-interlock sprinkler system that uses a very early smoke detection apparatus (VESDA) air-sampling system. A VESDA system regularly samples air in a data center looking for signs of a fire. If it detects smoke, the system triggers an alert so that the center’s staff can review and correct the situation before a fire begins.
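As a conceptual illustration only (not any vendor’s control logic), the sketch below captures the dual-interlock idea: water is released only when both a detection event and a heat-fused sprinkler head are present, while smoke alone simply raises an alert.

    # Conceptual sketch of dual-interlock pre-action logic: water is released
    # only when BOTH a smoke-detection event and a heat-fused sprinkler head
    # occur. Illustrative only; not any vendor's control logic.
    def dual_interlock(smoke_detected: bool, head_fused: bool) -> str:
        if smoke_detected and head_fused:
            return "charge pipes and discharge water"
        if smoke_detected:
            return "alert staff to investigate; pipes stay dry"
        if head_fused:
            return "supervisory alarm; no discharge without detection"
        return "normal operation"

    print(dual_interlock(smoke_detected=True, head_fused=False))
    # alert staff to investigate; pipes stay dry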

 
