Whose Job Is It, Anyway, to Identify the Best New Ideas in Government?
Consider for a moment the task of an examiner at the Office of Management and Budget, one of many civil servants responsible for compiling President Obama’s budget for the 2012 fiscal year. This examiner has been cooped up in the New Executive Office Building since mid-September, hashing through agency budget requests and identifying which programs are worthy of federal funding.
To make this job tougher, the Obama administration is operating in a climate of fiscal restraint. The White House has requested that each nonsecurity agency submit a budget request that’s five percent below last year’s discretionary totals. Yet agencies continue to think up innovative ways to tackle public problems. Resources are scarce, and the OMB examiner is the first barrier between proposals for new government programs and the congressional appropriations process through which they’re funded.
Each proposal presents a compelling case for why it should be funded. Proponents describe the constituency of support for their new idea or detail the pervasiveness of the problem it aims to eradicate. Some go further, setting out why the proposed approach would be effective, wielding evidence in the form of academic research or successful pilot programs.
With every proposal boasting the best possible information on the program, how is our examiner meant to work out what is genuinely a good idea and what is not? The reality is that OMB examiners have dangerously few resources at their disposal to sort robust evidence from less compelling claims of program effectiveness.
As a result, budget decisions are too often reduced to games of “he said, she said.” Proponents of a new program present a persuasive argument for government intervention, while opponents toss around statistics to the contrary. The final call is left to the discretion of other policymakers, whether in the halls of OMB or in the chambers of Congress.
The upshot is this: rarely is anyone in the conversation capable of critically examining the source and reliability of the evidence behind each claim.
Consider the debate surrounding abstinence-only sex education during President George W. Bush’s first term. When the Bush administration decided to pump hundreds of millions of dollars into these programs, supporters often cited statistically unrepresentative surveys and lofty rhetoric about biology and human nature. At the same time, 10 state evaluations showed little change in teens’ behavior since the start of abstinence programs in 1997. Absent an independent review of the evidence on each side, the final funding decision was left entirely to political ideology.
But there is a way the federal government could deal with this problem effectively and efficiently. In a recent report, “Scaling New Heights,” CAP argued for the creation of “Institutes for Effective Innovations.” These small, independent entities—comprised of experienced researchers and experts in a specific policy area—would help to sort the good ideas from the bad. They would marshal evidence of what works in their field and provide an unbiased assessment of the effectiveness of individual approaches.
These new Institutes for Effective Innovations would not generate data themselves but would collate all available information and assign each approach an overall score. As a simple example, if an approach toward, say, reducing local poverty rates had been shown to work in 10 different states based on a series of randomized controlled trials, it would receive a high score. If the approach had been tried in a number of places and the results were mixed, it would get a lower score. And where there was no evidence that the approach had ever worked, it would get an even lower score.
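The scoring logic described above can be sketched in a few lines of code. This is purely a hypothetical illustration: the report does not specify categories, thresholds, or point values, so everything below (the function name, the 0–3 scale, and the cutoffs) is invented for the sake of the example.

```python
# Hypothetical sketch of an evidence-scoring rubric. The scale and
# thresholds are illustrative assumptions, not part of the CAP proposal.

def score_evidence(trials):
    """Score a proposed approach from a list of study results.

    Each study is a tuple (rigorous, positive):
      rigorous -- True if the study was a well-run randomized controlled trial
      positive -- True if the study found the approach effective
    """
    if not trials:
        return 0  # no evidence the approach has ever worked
    rigorous_positive = sum(1 for rigorous, positive in trials
                            if rigorous and positive)
    negative = sum(1 for _, positive in trials if not positive)
    if rigorous_positive >= 5 and negative == 0:
        return 3  # strong, consistent evidence of effectiveness
    if rigorous_positive > 0 and negative > 0:
        return 2  # mixed results across sites
    return 1  # weak or inconclusive evidence

# An approach shown to work in 10 randomized controlled trials
# earns the top score:
print(score_evidence([(True, True)] * 10))  # 3
```

The point is not the particular numbers but the mechanism: a transparent, rule-based summary of the evidence base that a budget examiner can read at a glance.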
So by the time the proposal hits a desk at OMB as one of hundreds of new programs competing for funding, the score would help with the sorting process. With resources for new programs strictly limited, those with low scores would find minimal support.
The obligation to sort good ideas from bad ideas is not unique to OMB. Every state is looking for new ways to put its unemployed citizens back to work. Every major city has a senior official trying to identify the best way to reduce homelessness. And every school head has to determine the most effective way to tackle truancy or unruly behavior in classrooms. In each of these cases, decision-makers would surely benefit from ready access to reliable information on whether a proposed approach will work.
The Department of Education already embraces the concept of Institutes for Effective Innovations through its What Works Clearinghouse. Established in 2002 under the Institute of Education Sciences, the WWC is a collaborative effort between department staff and a nationally recognized education research firm. The WWC reviews research on educational products, programs, practices, and policies, generating unbiased appraisals of evidence for and against different reforms in public education.
For instance, when the National Center for Educational Evaluation and Regional Assistance reported in June that students admitted to charter middle schools through a lottery scored no differently on math and reading assessments than students not offered admission, the WWC found that the study was a well-implemented randomized controlled trial. But when researchers reported no differences in achievement between Milwaukee students who used a private-school voucher and students in Milwaukee Public Schools, the WWC analysts were less convinced. The study did not establish that students in both groups were initially comparable in math and reading achievement, so the WWC deemed the results inconclusive.
Since 2008, the What Works Clearinghouse has debunked faulty studies on privatized school management initiatives in Philadelphia, a teacher performance-pay program in Arkansas, and the effect of after-school programs in high-poverty communities. The OMB examiner responsible for education programs can use these brief reviews to either validate or challenge claims that these programs should receive federal funding.
The WWC is a promising archetype. We need similar institutions within other federal agencies, helping to identify which new ideas in unemployment, housing, public health, crime reduction, entitlement programs, and other fields work and should be replicated.
Setting up these institutions would be relatively inexpensive. As one source of funds, appropriators could divert some resources from existing evaluation units that typically focus their efforts on detailed evaluations of a small number of programs. Studies such as these are often too esoteric to be of use to decision-makers, and the information is only available after the program has been rolled out nationally. So even when an evaluation finds a program to be ineffective, the political constituency committed to maintaining the program is often too strong for any large-scale change to be feasible.
It’s much more useful to have good evidence available before budgeting decisions are made. That’s why agencies across government should follow the example of the Department of Education and set up Institutes for Effective Innovations.
Government budgeters at the federal, state, and local levels are faced with a daunting fiscal challenge in the coming years. We must take this simple step to make sure they’re working with the best information available.
Jitinder Kohli is a Senior Fellow at American Progress. His work focuses on government efficiency, regulatory reform, and economic issues at the Center’s Doing What Works project. John Griffith is a Research Associate with the project. Go to the Doing What Works webpage at the Center’s web site to learn more about the project.