While the IC’s research organization looks into adding security to cloud environments, in the here and now, intelligence agencies are sharing more data.
When in Rome …
That wasn’t exactly the logic behind the Immigration and Customs Enforcement IT security crew’s decision to bring along a National Security Agency blue team when it toured ICE offices in Italy last summer, but it wasn’t far off, either.
Rather than “do as the Romans do,” it was more “see what ICE employees in Rome do” when it comes to network and systems security. NSA lends out teams of super-savvy security experts to assess systems, using an arsenal of tools that range from policy reviews to penetration testing and attack-defend exercises.
“There weren’t any real surprises for us, but it was a pretty eye-opening experience for the staff at the facility,” says Gil Vega, ICE’s chief information security officer. “There was a lot of value in having another organization validate our assumptions.”
The exercise is illustrative of the shifting focus of cybersecurity in the government, namely that IT security teams should no longer toil away behind the scenes trying to lock down systems and chase down the latest attack vectors. Instead, information assurance teams and CISOs are coming out in the open and beginning to work in concert with CIO organizations as well as with their counterparts at other agencies.
This all sounds very promising. But for folks inside agencies who defend the perimeters of the government’s networks on a daily basis — like Vega — the question is, what’s happening now and, more important, what can federal IT and systems security staffs do to keep the bad guys at bay and to shore up vulnerabilities?
There are takeaways that agencies can apply that emanate from broader, cross-government initiatives: Use diagnostic tools so you know what’s going on inside your network and at its gateways; don’t go it alone — reach out for advice from other IT and security teams; work toward a baseline level of security tied to and updated for known threats; and heighten your post-attack forensic capabilities.
“We don’t know what the next attack is or when it is coming, but we like to know our baseline approach for how we will respond,” says Vega. To that end, the key metric to capture is network events, he says. “It’s easy to understand that if you are not seeing anything, then you are not in a position to protect your environment.”
The information assurance staff must have scanning tools and report logs for every piece of the systems infrastructure, says Patrick Howard, CISO for the Nuclear Regulatory Commission. The reports, logs and event monitors must be absolutely representative of what’s going on.
[Sidebar: Daily attempts to illegally breach systems at the Pentagon. Source: Pentagon CIO]
For practical purposes, this means asking the CIO, IT and program organizations to turn on these features in applications and systems they run. Most operating systems and network management tools and many application products have the capability to produce audit trail diagnostics and to send alerts. On top of that, an agency might want to layer a vulnerability scanning tool, says Vega, whose staff is pushing out a scanning program across ICE in conjunction with Homeland Security Department initiatives.
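The alerting capability Vega describes can be as simple as watching audit logs for suspicious patterns. As a minimal sketch (the log format, field names and threshold here are illustrative assumptions, not any agency’s actual tooling), a script might count failed logins per source address and flag repeat offenders:

```python
import re
from collections import Counter

# Hypothetical syslog-style lines; real formats vary by OS and application.
SAMPLE_LOG = """\
Jun 12 08:01:02 gw sshd[311]: Failed password for root from 203.0.113.9
Jun 12 08:01:04 gw sshd[311]: Failed password for root from 203.0.113.9
Jun 12 08:01:07 gw sshd[311]: Failed password for admin from 203.0.113.9
Jun 12 08:02:15 gw sshd[410]: Accepted password for jsmith from 198.51.100.4
Jun 12 08:03:40 gw sshd[512]: Failed password for root from 192.0.2.77
"""

FAILED = re.compile(r"Failed password for \S+ from (\S+)")

def flag_repeat_offenders(log_text, threshold=3):
    """Count failed-login attempts per source IP and flag any source
    that meets the threshold -- a deliberately crude alerting rule."""
    counts = Counter(m.group(1) for line in log_text.splitlines()
                     if (m := FAILED.search(line)))
    return sorted(ip for ip, n in counts.items() if n >= threshold)

print(flag_repeat_offenders(SAMPLE_LOG))  # ['203.0.113.9']
```

Production monitoring layers far more context on top of a rule like this, but the principle is the same: if the logs aren’t being collected and parsed, nothing downstream can see the attack.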
But to make sense of the information and spot traffic behavior that’s questionable, “you also have to know your agency’s business and mission better than anyone else,” says Howard. “It all boils down to having a good risk management view of what’s going on in your environment.”
At the Defense Department, the IA staff also advises taking snapshot analyses of individual security tools. These reviews create metrics by answering the question: “We bought this tool and what effect did it have?” says Capt. Sandra Jamshidi, director of the Defense-wide Information Assurance Program.
To do this, DOD sets up controlled exercises using modeling simulations to gauge incremental improvements. The metrics prove handy when defending budgets, Jamshidi adds.
Feds and industry security analysts point to the integrated nature of government systems as a ground-zero concern for security. “If you can get in one system, you can get in them all,” says Scott Borg, director of the U.S. Cyber Consequences Unit, a federally funded organization that analyzes critical infrastructures to identify vulnerabilities.
That’s why Howard’s first line of defense is, “Ask someone else, because someone will surely have confronted your problem already.”
These tend to be back-channel communications, says Vega, who adds that he still reaches out to feds who have moved on to industry and to officials at the intelligence agencies where he worked before his DHS posting.
Vega adds that “what we find, we share at the highest level and with US-CERT.”
Mischel Kwon, director of the U.S. Computer Emergency Readiness Team, says that vulnerability reporting by agencies continues to grow, which doesn’t necessarily reflect more or different weaknesses, but rather more willingness to share information. “This is a team sport, and there’s time for all of it if we all play.”
It also makes sense to “diversify the security liability” outside of the CISO shop, Vega points out. Consider his directorate within DHS by way of example: 600 sites in the United States and 60 or more overseas. “My approach is to create disciples,” he says.
To convey the import of cybersecurity, the Defense IA team starts each of its briefings with a threat update. “It grounds the conversation before you begin,” Jamshidi says. “It’s difficult for folks to understand the tools. You have to be able to articulate the technical information into the operational mission because it’s hard for folks to focus on things they don’t immediately understand.”
There has to be defense in depth, says Howard, which means that all the pieces of the security approach must work in tandem. Making time to study evolving threats means balancing that work against setting controls and keeping them up to date.
With literally millions of threat vectors out there and would-be barbarians peppering the government’s perimeter systems several thousand times a day, that challenge seems almost absurd. But it needn’t be, if agencies can agree on a set of known vulnerabilities to address and then build on, says John Gilligan, a former Air Force CIO turned consultant who’s heading up an effort to define 20 such categories under the umbrella of the Consensus Audit Guidelines.
“The 20 will be a foundation. That will be a livable house. You can live in reasonable security,” he says.
Topping the list is setting and locking down standard configurations for all hardware and software — basically the Federal Desktop Core Configuration on steroids. Why? Because, consistently, NSA reports that most vulnerabilities result from flawed configurations or errant settings, Gilligan says.
“Getting there is going to be tough. To lock down configurations will take years,” he says. “But it will give a lot more control over systems environments and networks, responding to attacks, and maintaining patches and updates.”
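The configuration lock-down Gilligan describes boils down to continuously comparing what each host actually has against a mandated baseline. As a minimal sketch (the setting names and values below are invented for illustration; real FDCC baselines are far larger and distributed as SCAP checklist content, not Python dicts), a drift audit might look like this:

```python
# Hypothetical baseline entries -- placeholders, not actual FDCC settings.
BASELINE = {
    "password_min_length": 12,
    "screen_lock_timeout_min": 15,
    "guest_account_enabled": False,
    "smbv1_enabled": False,
}

def audit_config(observed, baseline=BASELINE):
    """Return (setting, expected, actual) tuples for every setting
    that is missing from the host or deviates from the baseline."""
    findings = []
    for key, expected in sorted(baseline.items()):
        actual = observed.get(key, "<missing>")
        if actual != expected:
            findings.append((key, expected, actual))
    return findings

# Settings observed on one (imaginary) workstation.
host = {"password_min_length": 8, "screen_lock_timeout_min": 15,
        "guest_account_enabled": False, "smbv1_enabled": True}

for name, want, got in audit_config(host):
    print(f"DRIFT {name}: expected {want}, found {got}")
```

The hard part Gilligan alludes to isn’t the comparison; it’s agreeing on the baseline and keeping thousands of hosts from drifting away from it.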
In the meantime, Kwon encourages agencies to keep focusing on their FDCC efforts because they are working — “a lot of the low-hanging vulnerabilities are no longer there.” But FDCC, which established set configurations for systems running Microsoft Windows XP and Vista, looks at security from the box level; to create defense in depth, she stresses, the government needs baselines for each layer.
Agencies have been diligent in tackling FDCC and reducing perimeter exposure by consolidating Internet gateways under the Trusted Internet Connection initiative, Kwon says. By extending these types of threshold-based security efforts, the government will be able to lessen the overall vulnerability that exists because of the extensive interdependencies, she says. Plus, agencies will be able to adapt to changes in threats more quickly and with more precision, Kwon adds.
The purpose of the explicit guidelines, currently in draft form and developed by several dozen security officials in government and industry, is to give agencies practical tools, Gilligan says. The National Institute of Standards and Technology 800-series guidance provides solid advice, but it’s flexible and open to interpretation. To create consistent layers of security, the information “can’t be theoretical. It’s got to be specific,” he says.
After an event, attack-based metrics come into play. It’s crucial to replay an event and determine exactly what happened, Vega says, so that you can tweak systems and harden networks to prevent a recurrence. The goal is to use your forensic tools to do a deep-dive analysis, “not only to investigate, but also to sanitize.”
“We believe that those who develop systems need to have the same tools that we have, and that we need to have the ability for full-packet capture,” he says. That lets the event response team replay an incident repeatedly to dissect it.
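Replaying an incident from a full-packet capture starts with reading the capture file itself. As a minimal sketch of what a forensic tool does under the hood (this parses only the classic little-endian, microsecond-resolution libpcap format and builds a tiny capture in memory so it can run standalone; real tools handle many more variants and decode the packets as well):

```python
import struct

PCAP_MAGIC = 0xA1B2C3D4  # little-endian, microsecond-resolution captures

def iter_packets(data):
    """Yield (timestamp, payload_bytes) for each record in a classic
    libpcap capture held in memory."""
    magic, = struct.unpack_from("<I", data, 0)
    if magic != PCAP_MAGIC:
        raise ValueError("not a little-endian microsecond pcap")
    offset = 24  # skip the fixed-size global header
    while offset + 16 <= len(data):
        ts_sec, ts_usec, incl_len, _orig = struct.unpack_from("<IIII", data, offset)
        offset += 16
        yield ts_sec + ts_usec / 1e6, data[offset:offset + incl_len]
        offset += incl_len

# Build a two-packet capture in memory so the sketch is self-contained.
header = struct.pack("<IHHiIII", PCAP_MAGIC, 2, 4, 0, 0, 65535, 1)
rec = lambda ts, payload: struct.pack("<IIII", ts, 0, len(payload), len(payload)) + payload
capture = header + rec(100, b"\x01\x02") + rec(101, b"\x03\x04\x05")

for ts, payload in iter_packets(capture):
    print(ts, payload.hex())
```

Because every byte is preserved, the response team can rerun an analysis like this as many times as it takes to dissect the incident — exactly the replay capability Vega describes.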
Finally, the team needs to report the findings so other agencies — and organizations beyond the government — can address similar weaknesses.
This part of the security effort is admittedly tough, for two reasons, Vega says. First, some agencies simply don’t have the money to spend on forensic tools and training. Second, these efforts take time. But it’s time that agencies must take, he adds, because, “IT security has become as essential as food, water and shelter.”