
Performance Reviews That Work

Four Case Studies of Successful Performance Review Systems in the Federal Government

John Griffith and Gadi Dechter examine four successful performance review programs in the federal government.

The Federal Aviation Administration has been conducting performance reviews in some form since the early 1990s. (AP/Eugene Hoshiko)

Download the case studies (pdf)

Introduction

One of President Barack Obama’s first agenda items this year was to sign into law a little-known bill that could transform how the federal government goes about its business.

The Government Performance and Results Modernization Act of 2010 revamps and streamlines 17-year-old performance-monitoring rules for federal agencies. The new framework introduces a series of data-driven reviews that track Washington’s progress toward agency-specific and government-wide goals. Specifically, agencies and the Office of Management and Budget will now have to conduct formal reviews of progress toward “priority” goals at least once a quarter.

The connection between high performance and ongoing organizational self-assessment is well known. It’s what underpins the “culture of discipline” that Jim Collins identified as a characteristic of great companies in his 2001 best seller, Good to Great.

“It is not enough to convene meetings,” wrote Shelley Metzenbaum, OMB’s associate director for performance and personnel management, in a 2006 paper about performance accountability. “It is essential to get the tone right. Collins, in his study of successful corporations with breakthrough performance gains, identified four stylistic approaches used by successful companies for constructive feedback: lead with questions, not answers; engage in dialogue and debate, not coercion; conduct autopsies, without blame; build ‘red flag’ mechanisms.”

Federal government initiatives going back decades have attempted to hold agency managers accountable for performance—from Lyndon Johnson’s “Planning-Programming-Budgeting System” to Richard Nixon’s “Management by Objectives” to the original Government Performance and Results Act, or GPRA, signed by Bill Clinton in 1993. More recently, the public sector has embraced data-driven review processes, adapting the New York City Police Department’s CompStat system into municipal and statewide models like Baltimore’s CitiStat and Washington State’s Government Management Accountability and Performance tool.

And yet, federal agencies have struggled to implement ongoing performance review processes that become part of an institutional culture, survive political transitions, and avoid “measurement fatigue” among participants.

Under the revamped GPRA, agencies will by law have to conduct these reviews. The Center for American Progress, a strong advocate for accountability in government, supports this feature of the law and called for it in the recent publication “From Setting Goals to Achieving Them.” But if these constructive feedback or “interactive inquiry” sessions, as Metzenbaum has called them, are not well executed, we will have missed an opportunity to improve government efficiency—and to restore the public’s trust in its public servants.

Key questions

To that end, we’re interested in provoking a discussion about what makes a performance review work. What are the key characteristics of good reviews?

This short publication looks at performance reviews within the federal government that appear to be already fulfilling the GPRA Modernization Act’s simple but profound mandate: Set clear goals, develop a strategy for achieving them, monitor progress, and hold government officials accountable for the results.

We hope the following four case studies, conducted through interviews with NASA, FAA, VA, and IRS officials, are the beginning of a discussion about how OMB and agencies can set up and run effective performance reviews that endure—and become embedded within a data-driven culture of excellence. Among the questions the following case studies provoke:

  • How frequent should performance reviews be? Are quarterly reviews sufficient, or should they be more frequent?
  • Who should attend? Should attendance be mandatory or voluntary?
  • What kind of incentives should be built into a review’s design? Do certain incentives, like linking employee pay to performance targets, create unintended consequences?
  • Who should lead the review? Does the participation of top agency leaders matter?
  • What kind of metrics should be used to track progress? Are outcome-based data more useful than measures of inputs?

We hope that implementation of the new GPRA law in the coming months means we’ll soon be able to add to this list of successful case studies, and begin to tease out answers to these and other important questions.

To be sure, we don’t expect there to be a uniform mold for a successful review, and each agency will want to build a process that best supports its priorities and resources. Whatever shape these reviews end up taking, the following examples give us hope because they show that data-driven performance management has taken root in some corners of the federal government—and that it can lead to real improvements.

NASA Baseline Performance Review

Then-NASA chief engineer Christopher Scolese first recommended regular performance reviews as a forum for monitoring implementation of the agency’s strategic plan. Scolese describes the early reviews as “crisis management”: his staff caught wind of an issue, worked through ways to fix it, and moved on. After a stint as acting administrator, he still leads the voluntary monthly confabs.

In the days leading up to each monthly Baseline Performance Review, NASA program managers prepare performance reports for the associate administrator and associate deputy administrator, highlighting one or two areas of particular interest.

A team of independent analysts from the offices of the chief engineer, program analysis and evaluation, and safety and mission assurance prepares a separate performance assessment before the meeting to verify the program staff’s conclusions.

During the meeting, senior NASA leadership and program staff review data from both reports. The discussion is distilled into a final report of key findings and major areas of concern, which is sent to the agency administrator. The report evaluates the technical condition, cost, schedule, and overall performance of each program on a color-coded scale.

Scolese, now NASA’s associate administrator, attributes the BPR’s ability to survive the 2008 presidential transition to a concerted effort by senior civil servants. When the new deputy and associate deputy administrators arrived in 2009, senior career staff briefed them on the BPR process and stressed the meeting’s importance to the day-to-day operations of the agency, Scolese said. Both appointees became enthusiastic supporters of the review process after sitting in on their first BPR, he said.

The meetings have quickly resolved a range of technical, operational, and financial problems, among them schedule adjustments for Lunar Reconnaissance Orbiter missions, invoice payments, and Freedom of Information Act requests, according to the agency.

Federal Aviation Administration performance reviews

The Federal Aviation Administration has been conducting performance reviews in some form since the early 1990s. Mort Downey, then deputy secretary of transportation, convened regular meetings around so-called “performance agreements” with the administrators of each agency in the department. Downey met with the FAA administrator every month to check in on progress toward the agency’s priority goals, with a focus on flight safety.

Since then, each administrator has approached the reviews differently, according to Toni Trombecky, the agency’s strategic planning manager. The current system of monthly reviews is organized around a core set of monthly performance targets covering data on flight safety, capacity of the country’s air system, international travel, and organizational development.

The data tracked in the reviews are tied to 31 agencywide targets that are aligned to 170 “strategic initiatives.” The manager responsible for each target reports to the 17-person management board during each meeting. After a roundtable discussion, the board issues an 80- to 100-page “book” of findings.

Trombecky credits the monthly review process with helping produce the country’s longest streak without a commercial air fatality. The beginning of that 39-month streak coincided with the launch of the monthly reviews.

The FAA performance review owes its longevity in part to the office’s political and pay structure. The FAA administrator is by law appointed to a five-year term, insulating the position by design from the presidential election cycle. When a new president is sworn in, the agency usually does not have to change its leadership. FAA employees are also highly motivated by the agency’s performance targets: their pay depends on meeting them. As a “pay-for-performance” organization, the agency must achieve at least 90 percent of its targets before any employees receive annual pay raises.

Buy-in from senior agency leadership has also been critical to the initiative’s endurance, Trombecky says. The review format has been kept flexible to allow each administrator to reshape the process around his or her personality. That has encouraged successive leaders not only to embrace the system but also to take on increasingly large roles in it. “You keep people focused,” she says.

Department of Veterans Affairs “Monthly Performance Review”

Between 1994 and 1998, then-Veterans Affairs Under Secretary for Health Kenneth W. Kizer transformed the veterans’ health care system by establishing a framework for measuring and monitoring quality of care. The agency’s Monthly Performance Review process builds on Dr. Kizer’s legacy of data-driven performance management.

Every month, the deputy secretary’s staff compiles a book of high-level performance analysis that tracks progress toward the agency’s strategic goals. For example, one of the agency’s goals is to improve veterans’ access to health care, so the monthly book includes regional data on how many veterans have to wait more than 30 days to see their primary care physician.

While the staff members suggest what should be highlighted in each meeting, the deputy secretary ultimately determines the agenda, according to a senior VA official.

Agency leaders credit the monthly performance reviews, or MPR, with improved customer satisfaction because the process has enabled managers to more quickly track and improve statistics such as processing times for disability claims. The MPR has also exposed flaws in the agency’s human resources and acquisition systems, helping leadership understand and speed up processes to hire and train new staff.

The current format of the VA Monthly Performance Review has survived two political leadership transitions. One secret to its endurance has been a conscious decision by senior civil servants to present the MPR to incoming political appointees as a routine and inevitable part of the institutional culture. When a new deputy secretary arrives, he or she is told: “This is what we’ve been doing, we think you’ll find it of value, and we have one scheduled this week,” according to a senior manager.

Internal Revenue Service “Business Performance Reviews”

The Internal Revenue Service began its Business Performance Review System in 2001, in the wake of a large-scale reorganization effort within the agency. Then-commissioner Charles Rossotti instituted the BPR to fundamentally change the way the IRS reviewed and managed the execution of its strategic plan.

Unlike the agencies in the other case studies, which hold agencywide performance reviews, the IRS conducts multiple quarterly reviews organized around its major institutional units: operations support, services and enforcement, and the commissioner’s office. Each category contains several “business divisions,” and each division holds its own quarterly “business performance review,” or BPR.

The deputy commissioner for operations support, for example, convenes five BPR meetings each quarter.

Here’s how it works. Two weeks before each meeting, staff from each business division compile a document that lays out key performance data and identifies priority issues. Program managers within a business division then work with agency leadership of that division to develop a meeting agenda. After each meeting, the analysis documents are provided to the IRS Oversight Board, a nine-member presidentially appointed body that oversees IRS management.

The BPR reviews have helped IRS leadership run internal operations more efficiently and better understand the impact of their projects, senior officials say. For example, the reviews highlighted the effect of education and outreach services on reducing call volume, allowing customer service efforts to be more effectively targeted.

The BPR process has survived three commissioners and two acting commissioners, in part because of the IRS’s unique political and organizational structure, officials say. While the IRS commissioner is a political appointee, the two deputy commissioners are career civil servants. So even though the commissioner has the ultimate say on management issues, two-thirds of the BPR leadership remains in place through political transitions.

Many deputy commissioners are promoted from within the agency, so they tend to have been on the receiving end of performance reviews as they advanced in their careers, leading to stronger institutional buy-in. The far-flung dispersal of IRS offices also bolsters institutional support for the BPR ritual, officials say. Because the agency has about 750 regional offices scattered around the country, the reviews offer a rare opportunity for “face time” with the boss.


John Griffith is a Research Associate and Gadi Dechter is Associate Director of Government Reform at the Center for American Progress.

