Jul 31 2008

21st-Century Centralization

It wasn’t so long ago that the mainframe ruled IT. Centralized computing was the standard because organizations needed more processing power than smaller systems could provide. That has been especially true for the federal government, given the size of most agencies and the complexity of their processing loads. But as the microprocessor evolved and systems grew smaller and faster, we saw the rise of the PC and distributed-computing models.

Today the pendulum is swinging back to centralized computing and consolidation for two reasons: cost and security.

“When things are going well, nobody’s necessarily thinking about how to do things better,” says Alan Shark, executive director of the Public Technology Institute in Washington, D.C. Government operations at all levels — from local to federal — are “trying to figure out if it would make sense to share their systems with somebody else or set up a ‘consortium of equals’ in a central data facility.”

Consolidation With a Twist

While consolidating systems among different entities is one step that agencies are taking to get more for less (think Lines of Business, shared services and centers of excellence), other approaches gaining in popularity include the use of virtualization, thin clients and blade servers.

For instance, interagency teams of first responders in the Pacific Northwest are using thin-client kits to create on-the-fly networks that are more stable and more powerful than the jury-rigged networks they used to rely on to access data remotely. Most users don’t even know they’re on a thin client, says Peter Paul, a National Park Service technologist who also serves on a Pacific Northwest national incident management team, one of 17 such teams nationwide.

“It has the same look and feel as any computer,” he says. Paul and another Park Service techie, Don Winter, suggested using the kits after they had built a thin-client network at their home base of operations, Mount Rainier National Park.

Other potential solutions point to even more logical consolidation of effort and expense. That’s the approach NASA has taken this summer with a review of its data centers and a developing plan to consolidate from about 75 data centers to a few enterprisewide hubs.

The space agency’s highly distributed computing model has created powerful but costly IT silos, says CIO Jonathan Q. Pettus, which is driving the plan to consolidate and virtualize as much as possible. But Pettus is also quick to point out that while the trend for managing infrastructure is a return to a more centralized model, “that is not to say that we’re reverting back to the seventies and eighties model of how we manage IT as a function.”

Instead, the emerging model takes advantage of what’s best now, he says. “The other trend is that the function of IT — the decisions around applications and how they’re used — is moving in the other direction, where it’s much more dictated by the business and by organizations outside of IT.”

The benefit of using these new technologies for the Park Service, for NASA — for any agency — is that they can make a lot of servers function like a single machine or one server function like a lot of machines. This flexibility lets IT shops customize services for users across their agencies and give those users the look and feel they need to do their jobs.
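At a nuts-and-bolts level, that one-to-many mapping is what a hypervisor exposes to administrators. The short Python sketch below is purely illustrative and not drawn from the Park Service or NASA deployments; it assumes a Linux host running a libvirt-managed hypervisor (such as KVM/QEMU) with the libvirt-python bindings installed, and the connection URI is an assumption.

    # Illustrative sketch: list the virtual guest servers sharing one physical host,
    # i.e., "one server functioning like a lot of machines."
    # Assumes libvirt-python is installed; "qemu:///system" is a hypothetical local URI.
    import libvirt

    conn = libvirt.open("qemu:///system")    # connect to the local hypervisor
    try:
        for dom_id in conn.listDomainsID():  # IDs of the running guest VMs
            dom = conn.lookupByID(dom_id)
            state, max_mem, mem, vcpus, cpu_time = dom.info()  # memory reported in KiB
            print(f"{dom.name()}: {vcpus} vCPUs, {mem // 1024} MB of memory in use")
    finally:
        conn.close()

In a consolidated data center, inventory scripts along these lines are one way administrators keep dozens of logical servers visible and manageable on far fewer physical boxes.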

Photo: John Welzenbach
