Make it Count

To create successful metrics for security programs, keep your focus on measuring effectiveness, not performance.

The popular expression, “What gets measured, gets done,” may actually be more clever than true. It may be more accurate to say, “What gets measured … gets measured!” That’s all. Whether anything gets done as a result depends on the goal you are trying to achieve, on where your organization stands along the continuum toward that goal, and on management’s decisions and follow-through.

Metrics simply tell you where you are along the continuum. In some cases, metrics create a baseline from which to generate realistic goals for improved effectiveness. Management then must make the decision to set up a program for meeting these goals, taking measurements at milestones along the way and making course corrections as needed. In the end, metrics will tell you when your organization has achieved its goals.

To reach that end, you must convince management of the benefits of setting up a metrics program. What follows are steps to help you along the journey: selling management on metrics, understanding why you need a metrics program (and what the law says about metrics), figuring out the right metrics for your particular organization and building your program.

Making Your Case

Although most government managers realize metrics are important from a legislative and business perspective, they could use the help of an information assurance professional who understands security effectiveness.

Leaders generally need to understand how their organization’s programs are performing, whether to assess if management intervention is required or to present results to their superiors in understandable formats. Metrics generally fall into two broad categories: performance and effectiveness. For example, counting training attendance is a performance measurement (how many employees attended annual refresher training?), whereas demonstrating that the training actually works is an effectiveness metric (did fewer employees fall for a social engineering attack after they received the training?).
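The distinction can be made concrete in a short sketch. The functions and figures below are invented for illustration; they are not any agency's actual measures:

```python
# Hypothetical example: the same training program yields both a
# performance metric (attendance) and an effectiveness metric
# (behavior change after a simulated social-engineering test).

def performance_metric(attended, total_employees):
    """Percentage of employees who completed refresher training."""
    return 100.0 * attended / total_employees

def effectiveness_metric(phished_before, phished_after):
    """Percentage reduction in employees tricked by a simulated
    phishing attack after they received the training."""
    return 100.0 * (phished_before - phished_after) / phished_before

# Assumed example values, for illustration only.
print(performance_metric(980, 1000))   # training was delivered
print(effectiveness_metric(120, 30))   # training actually worked
```

An agency could report 98 percent attendance while the second number stays flat; only the second one says whether the training changed anything.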

Deciding to Set Up a Program

There are basically two reasons to set up a metrics program. The first reason grabs management’s attention, and the second reason helps obtain true buy-in:

• Federal agencies are required by law to use metrics to show they are in compliance with the Federal Information Security Management Act of 2002, the Government Performance and Results Act of 1993 and the Clinger-Cohen Act of 1996. (See the box below for details.)

• Metrics also happen to make good business sense. Metrics are tools to facilitate decision-making and improve performance and accountability through collection, analysis and reporting of performance-related data. IT security metrics must be based on IT security performance goals and objectives, yield quantifiable information (expressed in percentages or averages), be readily obtainable and replicable, and be useful for tracking performance and directing resources.

While it’s true that the law requires agencies to have metrics programs, what really sells an agency on metrics is that measuring progress toward realistic, effective goals makes good business sense. Showing progress helps justify programs and resources, which makes metrics a powerful tool in the often difficult case for information security. By measuring achievement toward goals and documenting success, metrics can also demonstrate a tangible return on investment.

Metrics may be hard to define, collect and consolidate into meaningful information, but it’s worth it for the return on investment of both the metrics program itself and the program to which the metrics apply.


Picking the Right Metric

Metrics and measurements can be expressed in several forms: They can be performance indicators (green, yellow or red); report card “grades” (A, B, C, D, F); numbers on a scale of 1 to 10; a yes/no binary status; or a numeric count.
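As a hypothetical illustration (the thresholds, grade cutoffs and the patched-systems figure are all invented for this sketch), a single raw measurement can be expressed in several of these forms:

```python
# One raw measurement -- percent of systems patched -- expressed as a
# traffic-light indicator and as a report-card grade.

def to_indicator(pct):
    """Performance indicator: green, yellow or red."""
    if pct >= 90:
        return "green"
    if pct >= 70:
        return "yellow"
    return "red"

def to_grade(pct):
    """Report-card grade: A, B, C, D or F."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if pct >= cutoff:
            return grade
    return "F"

pct_patched = 83.0                # a numeric count, as a percentage
print(to_indicator(pct_patched))
print(to_grade(pct_patched))
```

The underlying count is the same in every form; the choice of presentation depends on the audience and on how much precision the decision requires.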

It’s critical that you use metrics that are relevant to your organization and to the mission you’re measuring, so determining which metrics are right for you is an important early step. For example, you can count the number of certified and accredited information systems, the number of employees who have received basic user-awareness training in information security, or the number of viruses captured by a particular antivirus product. Each of these is simply a count.

An important distinction is necessary here. Where information security is concerned, agencies often measure performance, not effectiveness. This can lead to a false sense of security.

For example, a metric of performance might be the percentage of employees who completed an annual security awareness refresher course. Many federal agencies can claim 100 percent for performance, good for one full grade (10 points) on the annual FISMA report card. But no agency can claim to have measured the content, quality and effectiveness of that training.

Another example might be the certification and accreditation of information systems, required by FISMA. But C&A may be an imprecise, paper-based process in which risk is measured and accepted inconsistently across agencies and continuous monitoring of controls is almost nonexistent. It is therefore theoretically possible for 100 percent of an agency’s systems to be certified and accredited without a single one of those systems actually being secure. Two more full FISMA grades (20 points) are thus based on measures of performance, not measures of effectiveness.

The FISMA legislation is undergoing a revision, and early versions of the new language indicate that many of the issues that led agencies down the path of measuring performance versus measuring effectiveness will be cleared up. The likelihood of new legislation emerging by summer 2009 is considered high.

Creating Your Program

Here are eight steps you can use to build your metrics program:

1. Obtain management support to implement a strong metrics program. Explain how the law mandates collecting and reporting on metrics and how metrics make good business sense in that they help your organization measure its progress toward effective strategic and operational goals.

2. Review your organization’s mission statements, policies, plans, procedures, goals and objectives, and assess them against legislative and regulatory requirements, as well as against the agency’s effectiveness goals. Here is where you list the goals documented in the organization’s strategic plan, business plans and other documents.

3. Describe how the organization will achieve its goals and list milestones, dates and quantifiable objectives against which to map progress.

4. Select appropriate, quantifiable effectiveness metrics to indicate baseline, interim and final success. Be careful not to fall into the trap of focusing on performance when effectiveness is the proper measure.

5. Gather the metrics. You can use network logs, interviews with personnel familiar with the programs and progress, documentation (such as training records) and questionnaires (such as customer satisfaction surveys).

6. Analyze and present the results to management and key stakeholders. Take to heart the concept that visualization is the most effective method by which to gain management’s confidence and support. Consider a dashboard representation that simplifies multiple complex metrics in an understandable format.

7. Recommend that management make decisions based on the metrics, and plan the execution of these decisions. This is where the value of a metrics program lies. Remember that metrics are often referred to as “decision support.”

8. Evaluate the outcome of decisions against goals. This should be done from a perspective of effectiveness, not as a means to micromanage processes within an organization.
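Steps 5 and 6 above can be sketched in code. This is a hypothetical illustration only; the log format, metric names and thresholds are assumptions invented for the sketch, not any agency's actual reporting scheme:

```python
# Step 5: gather a count metric from network-log lines.
# Step 6: roll several metrics into a simple text dashboard.

def count_events(log_lines, keyword):
    """Count occurrences of one event type in the log."""
    return sum(1 for line in log_lines if keyword in line)

def status(pct):
    """Map a percentage to a traffic-light status for the dashboard."""
    return "GREEN" if pct >= 90 else "YELLOW" if pct >= 70 else "RED"

sample_log = [  # assumed log format, for illustration
    "2008-10-01 03:12 VIRUS-BLOCKED host=ws-114",
    "2008-10-01 04:55 LOGIN-OK user=jdoe",
    "2008-10-02 09:30 VIRUS-BLOCKED host=ws-207",
]

metrics = {  # assumed example values
    "Refresher training completed": 98.0,
    "Systems certified/accredited": 91.0,
    "Phishing failures reduced": 75.0,
}

print("Viruses blocked:", count_events(sample_log, "VIRUS-BLOCKED"))
for name, pct in metrics.items():
    print(f"{name:32s} {pct:5.1f}%  {status(pct)}")
```

Even a plain-text rollup like this gives management one view of several complex metrics, which is the point of the dashboard recommendation in step 6.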

Members of the bureau include federal IT security experts from government and industry. Bruce A. Brody and John R. Rossi were the authors of this peer-reviewed article. For a full list of bureau members, go to

Nov 07 2008