Center for American Progress

RELEASE: The Secret to Programs that Work: New Tools for Program Design and Evaluation
Press Release

By Jitinder Kohli, William D. Eggers, John Griffith | February 2, 2011

Read the full report (pdf) Download the introduction and summary (pdf)

Download a two-pager on Reviewing What Works tools (pdf)

Download a two-pager on Design for Success tools (pdf)

Less than one-third of Americans have confidence that the federal government can solve problems, according to a recent Center for American Progress survey. The sentiment may be worse at the state level. The Pew Center on the States recently found that less than 20 percent of respondents in California, Illinois, and New York trusted their state governments.

What accounts for such widespread frustration?

It’s not just the economy. As Vice President Joe Biden has said, there’s a feeling across America that “Washington, right now, is broken.”

And in some ways, it is.

Washington for years has been shooting at big targets and continues to miss the mark. After spending more than $1 billion, the government last month scrapped a troubled “virtual border” plan plagued for years by cost overruns and delays. In preparation for the 2010 Census, the Commerce Department spent two years developing handheld computers to replace millions of costly forms and maps used by field workers. The initiative failed and workers reverted to pen and paper.

At the root of such failures is faulty design. The way public policy is designed today results in programs that sound good in hearings but don’t work in the real world. This paper diagnoses common design flaws and proposes a kind of advance-warning system to help policymakers distinguish programs with a high chance of success from those likely to run into problems down the line.

Five common design flaws in government programs

After consulting about 200 government experts over six months, we discovered a handful of common problems that can doom government programs before they begin.

The wrong approach

Decision makers often forget to ask the most basic question when considering new programs: Do we really need to do this?

Insufficient evidence

Government programs too often sprout from little more than a policymaker’s hunch, with scant evidence they’ll actually work.

Poor implementation planning

Program advocates often get so caught up in the political process that they skip over crucial implementation issues in the design phase, leading to cost overruns and timeline delays.

Misunderstood incentives

Programs created without a precise understanding of the incentives embedded in them are vulnerable to deception or “gaming.”

Insufficient performance assessment and refinement

The best businesses are committed to constantly monitoring and improving the performance of their products. But not enough government programs are designed to report whether they’re actually working.

A checklist-inspired solution: Design for Success

In his book The Checklist Manifesto: How to Get Things Right, Atul Gawande revealed the power of simple checklists to prevent systemic failure in a variety of contexts, from operating rooms to investment companies.

Gawande’s thesis was a starting point for this project. Could a checklist-type system be used to predict a government program’s likelihood of success? Might it prevent the all-too-common design flaws that lead to implementation crises?

In this spirit, we sought to define the characteristics of a program that was likely to succeed and then ask proponents to go through a checklist-type process early on.

An effective checklist has two components: a list of well-designed questions and a process for how and when to ask them. We narrowed our focus to five broad areas, covering questions that can help policymakers avoid common design flaws:

  • Approach: Is this the right approach to address the problem?
  • Evidence of effectiveness: Has the program been successfully implemented elsewhere? Has there been a rigorous evaluation of its impact?
  • Incentives: Does the program design minimize the risks of cheating the system?
  • Implementation: How does the agency responsible for administering the program plan to secure the necessary staff, skill base, and technology infrastructure? Are the plans and timelines reasonable?
  • Monitoring and rethinking: Are there clear indicators that define success? Is there a plan for collecting timely and accurate data to monitor performance?

These questions underpin a series of questionnaires and checklists that probe the five common design flaws we identified.

Of course, our “Design for Success” tools themselves are only valuable if used the right way. So we also dedicated considerable time to working with experts on how to best fit these tools into the program-creation process. Here’s the process in a nutshell:

STEP 1: The new program proponent uses a “program checklist” to design an initiative.

STEP 2: The proponent completes a “program details” questionnaire that probes the five key components of a successful program described above.

STEP 3: A neutral party, such as an interagency panel or legislative committee, completes a “program assessment” questionnaire to evaluate the likely success of the new program.

STEP 4: Decision makers use the information on both questionnaires to guide their scrutiny of the program.

Reviewing What Works: Tools for existing programs

Having established a process to predict the likelihood of a new program’s success, we next adapted these tools and procedures to an equally important task: evaluating the effectiveness of existing programs.

At a time of looming budget cuts, Washington urgently needs a better way to distinguish the most effective programs from those in need of reform. Otherwise it risks slashing good programs simply because they have less political support.

Our tools for evaluating existing programs build on two recent government performance milestones: the Obama administration’s 128 High Priority Performance Goals and the Government Performance and Results Modernization Act, signed by the president in January. The law requires the executive branch to adopt cross-cutting outcome goals and to report regularly on progress toward achieving them.

The “Reviewing What Works” process evaluates programs across a policy area against these goals, using interagency panels as arbiters of effectiveness. Again, the process in a nutshell:

STEP 1: The government forms interagency panels by policy area.

STEP 2: These panels define common goals and list programs that contribute to these objectives on a “policy strategy” questionnaire.

STEP 3: Program managers complete a “program effectiveness” questionnaire for each initiative.

STEP 4: The interagency panels complete “program evaluation” questionnaires to determine the effectiveness of individual programs.

STEP 5: The questionnaires inform decisions about which programs to expand or reform.

For existing programs, the questions revolve around five key concerns:

  • Impact: What impact does the program have on the goals across government in the particular policy area?
  • Collaboration: Does the program coordinate with other programs to maximize collective impact and minimize duplication?
  • Benchmarking: What is the relative effectiveness and cost of the program?
  • Operational excellence: Is the program well run? Have there been delays or cost overruns?
  • Adaptability: Has the program sought to learn from experience? Has it improved in response?

A time to act

More than 80 percent of Americans think the federal budget process should be reformed so that spending decisions are based on what works, according to a 2010 Center for American Progress survey.

This demands more careful consideration, during the design phase, of whether a new program is likely to work. Equally, we need a system that scrutinizes existing programs for effectiveness, not merely political attractiveness.

We can no longer defer proper consideration of which programs are most and least effective. The time to act is now. We believe this report shows a way forward.

Event: Transforming Program Performance

To speak to CAP experts, please contact Megan Smith at [email protected] or 202.741.6346.