Chip Hebenthal and Chris Bartz oversee a Coast Guard data center that has benefited from automated technology tools for nearly 15 years.

Jul 28 2015

Agencies Tap Automation Tools for Greater Data Center Efficiencies

The list of available tools is long, but they all work together to ensure power doesn’t go to waste.

The U.S. Coast Guard (USCG) Operations Systems Center (OSC) in West Virginia has added enterprise systems, processing power and data, all without hiring additional staff for maintenance and oversight. How?

For nearly 15 years, USCG has invested in automated monitoring, sensing and management tools, allowing officials to better understand the inner workings of the data center and to manage power, cooling and other budget-intensive resources more effectively.

The OSC taps powerful data center infrastructure management (DCIM) tools that allow personnel to automatically aggregate disparate data center information and receive a holistic, granular view in an easy-to-monitor dashboard.

The information allows IT teams to more effectively manage how and when servers and cooling units are tapped for duty.
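
As a rough illustration, the snippet below sketches how a DCIM-style tool might merge readings from separate sources (a power distribution unit, a cooling unit and a rack sensor) into one dashboard view. The device names and values are hypothetical stand-ins, not the Coast Guard's actual systems.

    from dataclasses import dataclass

    @dataclass
    class Reading:
        source: str   # e.g., a PDU, a CRAC unit or a rack-mounted sensor
        metric: str
        value: float
        unit: str

    # Hypothetical readings; a real DCIM tool would pull these from live feeds.
    readings = [
        Reading("pdu-3", "power_draw", 42.7, "kW"),
        Reading("crac-1", "supply_air_temp", 18.5, "C"),
        Reading("rack-07-top", "inlet_temp", 24.1, "C"),
    ]

    def dashboard_view(readings):
        """Group disparate readings by source for a single, holistic view."""
        view = {}
        for r in readings:
            view.setdefault(r.source, []).append(f"{r.metric}: {r.value} {r.unit}")
        return view

    for source, metrics in dashboard_view(readings).items():
        print(source, "->", "; ".join(metrics))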

“We’ve instituted a lot of what you’d call ‘smart’ technologies,” says Mike Scott, Core Technologies Division chief for USCG’s OSC, adding that the center runs more than 50 strategic and operational enterprise systems. “We did it because we needed to bend the cost curve down over the years. Otherwise, we wouldn’t have been able to afford what we have today.”

Chip Hebenthal, Data Center Operations and Engineering Division chief for USCG, and Commanding Officer Chris Bartz say that the new tools, combined with other operational changes, save about 25 percent on power and energy costs. The tools also help USCG avoid the cost of additional personnel or new capital equipment investments and provide the IT team with greater peace of mind.

“Without setting foot in any of the spaces, I know exactly what’s going on with all of the systems, all the time,” Hebenthal says. “We can adjust, tweak and fix any problems very, very quickly.”

140 billion

The number of kilowatt-hours of electricity that U.S. data centers will consume annually by 2020, equal to the annual output of 50 power plants

SOURCE: Natural Resources Defense Council, “Data Center Efficiency Assessment: Scaling Up Energy Efficiency Across the Data Center Industry,” August 2014

Getting Granular

Automation is not a new concept in data centers, says Jennifer Koppy, research director for data center trends and strategies for IDC.

All data centers have some form of building management systems, and many have more sophisticated technology to monitor the critical facilities and IT equipment in real time.

Koppy says the reason is simple: Investing in automation increases efficiency, reducing the need for more data centers. Federal agencies lag the private sector in adopting the practice, but adoption appears to be picking up.

New monitoring and control systems can provide automated notifications, metrics and power switching. Wired sensors measure temperature, humidity, airflow, underfloor pressure and other environmental factors to provide more granular insight and control. A computational fluid dynamics tool simulates a data center’s cooling performance to help teams optimize airflow rates and server temperatures.
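
To make that concrete, here is a minimal sketch of the kind of threshold check that sits behind automated notifications. The metrics mirror the ones listed above; the specific limits and sensor names are assumptions for illustration, not any vendor's defaults.

    # Acceptable ranges are assumptions; operators would tune them to their
    # own facility and equipment guidance.
    THRESHOLDS = {
        "inlet_temp_c": (18.0, 27.0),
        "relative_humidity_pct": (20.0, 80.0),
        "underfloor_pressure_pa": (10.0, 25.0),
    }

    def check(sensor, metric, value):
        low, high = THRESHOLDS[metric]
        if value < low or value > high:
            return f"ALERT {sensor}: {metric}={value} outside {low}-{high}"
        return None

    alerts = [a for a in (
        check("rack-12-mid", "inlet_temp_c", 29.3),
        check("zone-b", "relative_humidity_pct", 55.0),
    ) if a]
    print("\n".join(alerts) if alerts else "All readings in range")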

Eye-Opening Insight

William Tschudi, a program manager and head of the High Tech and Industrial Systems Group at Lawrence Berkeley National Laboratory, says his organization provides technical advice to other agencies and the private sector on how to improve data center efficiency. With the support of industry partners, Berkeley Lab developed and is continuously refining a test kit of wireless sensors and other metering technology for hands-on data collection, visualization and demonstrations.

“We go into an agency’s data center and set up sensors and show the data center staff the results on a dashboard so they can visualize what’s happening,” he explains. “It’s eye-opening.”

Tschudi knows from experience: Over the past several years, he and his team worked with data center personnel at Berkeley Lab to implement a series of efficiency measures in its legacy data center, which supports business and scientific computing applications. Those measures include the introduction of a wireless monitoring system and wireless sensors, placed strategically at the top, middle and bottom of both the air-inlet and air-outflow sides of a server.

Tschudi says the combination “gives us a fine-grain picture of what the temperatures look like in the data center, as well as the humidity and pressures under the floor.”
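
A small sketch of what that fine-grained picture might look like in code follows: three paired inlet and outlet readings per server, with the temperature spread calculated and any hot inlet flagged. The temperatures and flag point are invented for illustration, not Berkeley Lab's actual data.

    # Hypothetical readings from the top, middle and bottom sensor pairs on
    # one server; values are made up.
    readings = {
        "top":    (23.8, 35.2),   # (inlet_temp_c, outlet_temp_c)
        "middle": (22.5, 33.9),
        "bottom": (21.9, 33.1),
    }

    HOT_INLET_C = 27.0  # assumed flag point, not a Berkeley Lab setpoint

    for position, (inlet, outlet) in readings.items():
        delta_t = outlet - inlet
        flag = "  <-- hot spot" if inlet > HOT_INLET_C else ""
        print(f"{position:>6}: inlet {inlet:.1f} C, delta-T {delta_t:.1f} C{flag}")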

The system also transmits power data wirelessly to a central computer for further analysis. The tools allowed data center administrators to make a number of key discoveries: More air was being circulated than was required.

Hot spots appeared at certain places in the data center, despite the overall chilly temperature. Unnecessary humidity and a lot of redundancy also proved to be common pain points.

In one instance, the wireless monitoring system alerted administrators after someone placed a carton on a floor tile and blocked airflow.

“We weren’t getting enough air cooling in that area, and some of the servers started to overheat,” Tschudi says. “The data center administrators saw what was happening and were able to fix it before it caused any issues.”

82%

The percentage of data center administrators who say that reducing their cooling costs is a strategic priority

SOURCE: IDC, Data Center Survey, December 2014

Finding Efficiency

Such insight and granularity, along with other infrastructure changes such as segregating hot and cold airstreams, allowed Berkeley Lab to increase its IT load by more than 50 percent without requiring additional energy consumption. Administrators also raised the room temperature five degrees, shut down a 15-ton air conditioner and gained a 30 percent improvement in power usage effectiveness (PUE).
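
For readers unfamiliar with the metric, PUE is simply total facility power divided by the power delivered to IT equipment, so a lower number is better. The figures below are illustrative, not Berkeley Lab's actual measurements; they just show how a roughly 30 percent improvement falls out when the IT load grows while total facility draw holds steady.

    def pue(total_facility_kw, it_load_kw):
        """Power usage effectiveness: total facility power over IT power."""
        return total_facility_kw / it_load_kw

    before = pue(total_facility_kw=900.0, it_load_kw=500.0)   # 1.80
    after = pue(total_facility_kw=900.0, it_load_kw=714.0)    # about 1.26

    improvement = (before - after) / before * 100
    print(f"PUE {before:.2f} -> {after:.2f} ({improvement:.0f}% improvement)")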

That performance is one reason Tschudi and other federal data center experts believe the Department of Energy’s Better Buildings Data Center Challenge to improve efficiency 20 percent over 10 years should be attainable. “You can realize huge savings just by knowing what’s going on in your data center,” Tschudi says. “What we’ve done at Berkeley Lab with an older data center is a good story of what can be achieved.”

Constant Evolution

Even brand-new data centers can benefit from investments in the latest smart technologies.

The National Oceanic and Atmospheric Administration’s Environmental Security Computing Center (NESCC) in Fairmont, W.Va., was constructed in 2011 with building automation system controls and sensors.

Darren Smith, the program manager for NESCC, has already made numerous changes to the facility and looks to add even more smart tools.

Administrators have used the facility’s original wired sensors to measure the inlet air temperature of each computing rack and provide feedback to the control system, which determines whether air-handler fan speeds need to be adjusted, or cool-water valves opened or closed. All of that affects the data center’s operating costs.
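
A simplified sketch of that feedback loop appears below: the hottest rack inlet temperature drives the air-handler fan speed and a cool-water valve. The setpoint, gain and limits are assumptions for illustration, not NESCC's actual control parameters.

    SETPOINT_C = 24.0                      # assumed target inlet temperature
    FAN_MIN_PCT, FAN_MAX_PCT = 30.0, 100.0

    def control_step(inlet_temps_c, fan_pct):
        """One pass of a simple proportional controller on the hottest inlet."""
        error = max(inlet_temps_c) - SETPOINT_C          # positive = too warm
        fan_pct = min(FAN_MAX_PCT, max(FAN_MIN_PCT, fan_pct + 5.0 * error))
        valve_open = error > 1.0                         # open cool-water valve
        return fan_pct, valve_open

    fan = 50.0
    for temps in ([23.5, 24.2, 25.8], [23.4, 24.0, 24.6], [23.1, 23.6, 23.9]):
        fan, valve = control_step(temps, fan)
        print(f"hottest inlet {max(temps):.1f} C -> fan {fan:.0f}%, valve open: {valve}")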

Almost immediately, the team realized the original rack doors blocked too much airflow and needed to be replaced.

Smith and his team are now developing plans to deploy more wireless sensors and combine them with DCIM software. The cost of wireless sensors continues to drop, and they are easy to deploy, he says.

“I’ll be able to easily place three sensors on each rack — top, middle and bottom — and get more detailed information on how they’re operating,” Smith says.

“There are a lot of small fans on the inside of the rack that will automatically speed up if the rack gets too hot, which requires more energy,” he says. “If I can use my sensors to connect what’s going on with those racks with the DCIM tools, we can respond more quickly, and more accurately than is possible today, and really deliver the cool air where it’s needed.”
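
One hypothetical way the sensor-to-DCIM link Smith describes could work is sketched below: each rack's three readings are compared against a target, and the racks that most need cool air rise to the top of the list. The rack names, temperatures and target are made up for illustration.

    # Hypothetical per-rack inlet temperatures (top, middle, bottom), in C.
    racks = {
        "rack-01": [22.4, 23.1, 24.0],
        "rack-02": [26.9, 27.8, 28.4],
        "rack-03": [23.2, 23.7, 24.5],
    }

    TARGET_C = 25.0  # assumed inlet target

    # Rank racks by how far their warmest sensor sits above the target, so the
    # cooling system (or an operator) can attend to the neediest rack first.
    needs = sorted(
        ((max(temps) - TARGET_C, name) for name, temps in racks.items()),
        reverse=True,
    )
    for overage, name in needs:
        action = "boost airflow" if overage > 0 else "no action needed"
        print(f"{name}: {overage:+.1f} C vs target -> {action}")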

In the meantime, NESCC is already reporting efficiency gains: This past year, the facility saw its PUE drop 25 percent, and monthly heating and cooling costs fell by $37,000, despite adding more supercomputers and electrical systems.

The benefits of smart tools may go well beyond energy savings. Equipment will last longer, and teams will incur less downtime as potential issues are identified immediately, Smith says.

“I sleep really well at night because I know if anything goes wrong, I’m going to know about it right away. We’ve got an automated system that will send out alarms. We’ll be notified, and we can respond in time to do something about it,” Smith says.
