Dec. 31, 2009

Tried and True

NASA finds that the processes it uses for rocket science work for replacing its WAN — on time and under budget.

If John McDougle wanted to drag his heels on finishing his wide area network project, he had a good — and unique — excuse: He had to wait for the space shuttle Discovery to launch and return safely to Earth.

Who’s going to argue? Although NASA maintains Discovery’s flight data on a separate network, the deputy CIO at the Marshall Space Flight Center and manager of the WAN Replacement Project (WANR) was taking no chances. As a precaution, McDougle put everything on ice for 12 days in July while Discovery rocketed into space and completed its mission. Despite that work stoppage, McDougle and his team still finished the project just under its 30-month schedule and within the $20 million budget.

The project, which wrapped up not long after Discovery touched down, is part of the broader NASA Integrated Infrastructure Initiative to increase bandwidth and ease remote systems maintenance. It also lets NASA comply with the Office of Management and Budget’s mandate that all agencies implement Internet Protocol Version 6 by July 2008. The standard will, among other things, vastly expand the pool of available IP addresses and improve the quality of high-bandwidth multimedia sent over Internet connections.
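For a sense of the address-space change IPv6 brings, here is a minimal back-of-the-envelope sketch; the figures follow directly from the 32-bit and 128-bit address widths, and nothing here is specific to NASA’s deployment.

```python
# Back-of-the-envelope comparison of IPv4 and IPv6 address space.
IPV4_BITS = 32
IPV6_BITS = 128

ipv4_addresses = 2 ** IPV4_BITS   # about 4.3 billion
ipv6_addresses = 2 ** IPV6_BITS   # about 3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,} addresses")
print(f"IPv6: {ipv6_addresses:,} addresses")
print(f"IPv6 holds {ipv6_addresses // ipv4_addresses:,} times as many")
```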

The WANR team more than hit the mark: It increased overall network bandwidth by 1,017 percent, markedly improved network availability and drastically improved the network’s survivability during a disaster by creating a redundant network and rerouting cable to disperse the risk.
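That 1,017 percent figure is straightforward percentage arithmetic. A minimal sketch, using hypothetical before-and-after aggregates rather than NASA’s actual totals, which the article doesn’t give:

```python
# Percent increase = (new - old) / old * 100.
# The aggregates below are hypothetical placeholders, not NASA's totals.
old_bandwidth_mbps = 1_000     # assumed aggregate before the replacement
new_bandwidth_mbps = 11_170    # assumed aggregate after the replacement

increase_pct = (new_bandwidth_mbps - old_bandwidth_mbps) / old_bandwidth_mbps * 100
print(f"Bandwidth increase: {increase_pct:.0f} percent")   # -> 1017 percent
```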

That’s not to say everything went without a hitch. As the massive project sped along, small speed bumps arose. In one instance, NASA had trouble accessing facilities at the Dryden Flight Research Center, which sits on Edwards Air Force Base, because some of the buildings are not open 24/7, and the WANR team needed around-the-clock access. Elsewhere, unplanned building maintenance delayed network upgrades.

None of these blips amounted to much by themselves, but collectively they meant the official transition to the new network, tentatively scheduled for June 29, had to wait until Discovery returned July 16 (still within the total schedule window).

Big, Big, Big

The sheer breadth of the network is impressive. Over the course of a year, the NASA team oversaw the laying of thousands of miles of fiber-optic cable to link 14 NASA data centers and five commercial data centers, known as carrier-independent exchange facilities (CIEFs), in Atlanta, Chicago, Dallas, San Francisco and Washington, D.C. Switches and routers from Cisco Systems and Juniper Networks connect them all.
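One rough way to picture the layout is a five-node core ring with center spokes. In the sketch below, the center-to-CIEF pairings are assumptions for illustration, since the article doesn’t give the mapping:

```python
# Sketch of the WANR layout: five CIEFs in a core ring, NASA centers as
# spokes. Which center attaches to which CIEF is an assumption here.
CIEFS = ["Atlanta", "Chicago", "Dallas", "San Francisco", "Washington, D.C."]

# Core ring: each CIEF links to the next, wrapping around.
core_links = [(CIEFS[i], CIEFS[(i + 1) % len(CIEFS)]) for i in range(len(CIEFS))]

# Example spoke attachments -- hypothetical pairings.
spokes = {
    "Marshall Space Flight Center": "Atlanta",
    "Kennedy Space Center": "Atlanta",
    "Langley Research Center": "Washington, D.C.",
}

for a, b in core_links:
    print(f"core: {a} <-> {b}")
for center, cief in spokes.items():
    print(f"spoke: {center} -> {cief}")
```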

1,017%
The bandwidth boost NASA accomplished through its WAN replacement.
Source: NASA WANR team

Of course, it’s a network’s speed that really matters, and NASA’s new WAN rockets data along. The core ring connecting the five CIEFs is made of OC-48 fiber optic, which transmits data at nearly 2.5 gigabits per second. Connecting the centers to one another and to the core ring are OC-12 fiber optic, which communicates at 622 megabits per second, and OC-3 fiber, which communicates at 155Mb/sec. Together, they replace NASA’s asynchronous transfer mode topology, which could send data across long distances at rates of 155Mb/sec or 622Mb/sec, and T1 lines, which top out at 1.5Mb/sec.
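Those OC levels aren’t arbitrary: each SONET optical carrier level OC-n runs at n times the 51.84Mb/sec base rate, which is where the figures above come from. A quick sketch (the role labels simply paraphrase the paragraph above):

```python
# SONET optical carrier line rates: OC-n = n * 51.84 Mb/sec.
OC_BASE_MBPS = 51.84

links = [(3, "lower-demand center links"),
         (12, "center and core-ring links"),
         (48, "core ring between the five CIEFs")]

for n, role in links:
    print(f"OC-{n}: {n * OC_BASE_MBPS:,.2f} Mb/sec ({role})")
# OC-3:  155.52 Mb/sec
# OC-12: 622.08 Mb/sec
# OC-48: 2,488.32 Mb/sec -- the "nearly 2.5 gigabits per second" core
```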

From July until the end of September, NASA kept both networks live while McDougle and his team ran final checks and formally decommissioned the older WAN.

“The project was very successful, was completed on budget and with significant new capabilities at lower operations and maintenance costs,” NASA Chief Technology Officer John McManus says.

Follow the Rules

How did they get the job done on time and within budget? McDougle says they went by the book: NASA Procedural Requirements (NPR) 7120.5, the space agency’s 174-page compendium of project-management best practices, commonly known as the NASA 7120 regs. Some modifications to the regulations were necessary because the project involved building a network rather than a spacecraft. Still, McDougle and his team benefited by following the 7120’s directives, which call for using program commitment agreements (PCAs) and creating governing program management committees.

In a NASA project, the PCA is essentially a contract between the NASA deputy administrator and the mission support office director. In fact, NASA leadership will not authorize a project without a signed PCA. The agreement also serves as a handy executive summary of the project plan, helping a harried and hierarchical workforce stay on track. It documents all the objectives; technical performance, schedule, cost, safety and risk factors; internal and external agreements; and independent reviews.
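As a loose illustration of what such an agreement captures, here is that list of PCA contents modeled as a record; the structure and field names are hypothetical, drawn from the description above, not NASA’s actual document format:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a PCA's contents; field names follow the
# article's description, not NASA's actual 7120 document format.
@dataclass
class ProgramCommitmentAgreement:
    objectives: list[str]
    technical_performance: str
    schedule: str                    # e.g., "30 months"
    cost: str                        # e.g., "$20 million"
    safety_and_risk_factors: list[str]
    internal_agreements: list[str] = field(default_factory=list)
    external_agreements: list[str] = field(default_factory=list)
    independent_reviews: list[str] = field(default_factory=list)
    signed: bool = False             # no signed PCA, no authorized project
```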

In this case, the PM committee was primarily responsible for evaluating the cost, schedule, safety and technical content for the WAN project to ensure that it met the commitments outlined in its PCA.

The committee held complex and well-documented review meetings to consider technology options. For example, in the early days of the WAN project, the WANR team started with four candidate network topologies. That number grew to 10, narrowed to seven, then back to four, and finally to one. “There was a lot of interaction with our clients,” says McDougle, referring to the 14 data centers and their various departments. “We had to make sure we understood the specific nature of the traffic.”

For example, the Langley Research Center in Hampton, Va., requires bandwidth of about 350Mb/sec, the Marshall Space Flight Center in Huntsville, Ala., requires bandwidth of more than 1.6 gigabits per second, and the Kennedy Space Center needs more than 1Gb/sec bandwidth.

In each case, the intricate review process and back-and-forth between the centers and the PM team led to upward adjustments in the bandwidth requirements. When the initial critical review process wrapped up in November 2004, for example, the team had estimated that Kennedy needed only about 738Mb/sec and Marshall just more than 1Gb/sec.
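A minimal sketch of the kind of capacity check those reviews fed, using the bandwidth figures quoted above; which circuit size each center actually received is an assumption for illustration:

```python
# Match each center's stated bandwidth need to the smallest single OC
# circuit that covers it. Requirements are the article's figures; the
# circuit assignments this produces are illustrative, not NASA's design.
OC_RATES_MBPS = {"OC-3": 155.52, "OC-12": 622.08, "OC-48": 2488.32}

requirements_mbps = {
    "Langley Research Center": 350,        # "about 350Mb/sec"
    "Kennedy Space Center": 1000,          # "more than 1Gb/sec"
    "Marshall Space Flight Center": 1600,  # "more than 1.6 gigabits per second"
}

for center, need in requirements_mbps.items():
    fit = next((name for name, rate
                in sorted(OC_RATES_MBPS.items(), key=lambda kv: kv[1])
                if rate >= need), None)
    print(f"{center}: needs {need} Mb/sec -> smallest circuit: {fit}")
```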

By late September, the team had deployed the WAN and decommissioned the ATM network. That left the WANR team with one short-term 7120 project goal: a post-completion review. Mission accomplished.

 

Photo: Owen Stayner