
Recent Evaluations of Performance-Pay Programs

SOURCE: AP/LM Otero

A chemistry teacher in front of her classroom in Texas. Texas has made the largest single state investment in performance-pay programs in the country with $868.1 million over a six-year period dedicated to the development of three performance pay programs.


Download this memo (pdf)

Many of the existing teacher pay-for-performance programs are too new to support a large body of rigorous research evidence, but there are some promising findings upon which states and districts can build programs and policy.

It’s clear that the status quo isn’t working. The way we currently pay teachers in this country—based on years of teaching experience and educational credits—does little to attract talented teachers to the profession, and particularly to schools that serve high proportions of low-income children. This is not surprising, since research shows that post-baccalaureate education and years of teaching experience beyond the first few years are not related to a teacher’s ability to improve student achievement. If a primary goal for schools is to improve student achievement, then their largest educational expenditure—teachers’ salaries—should be allocated in ways that are aligned with this goal.

School district human resource strategies for teachers—including evaluation systems and professional development—are currently as weak as compensation systems. A number of recent reports have found that most teacher evaluation systems are superficial, rate all teachers as satisfactory or excellent, and provide little feedback to teachers to improve their performance. Compensation reforms can be integrated with these other systems and drive improvements in these areas, as well. For example, because pay-for-performance programs must rely on high quality data and evaluation systems, implementing a pay-for-performance program can be the impetus for improvements to these other systems.

Experience with implementing the Teacher Advancement Program—a comprehensive school reform that includes professional development, rigorous teacher evaluation, career ladders for teachers, and pay-for-performance—shows us that programs need time to evolve and improve. The TAP model is the basis for a number of the Teacher Incentive Fund grants—federally funded pay-for-performance programs for teachers and principals in high-needs schools. TAP has continually improved its model based on evaluation findings, and now research indicates that it significantly improves student achievement at the elementary level.

States, school districts, and non-profit organizations need to design programs that have a feedback and evaluation mechanism. This will allow them to change and improve programs over time. Program managers also need funding and time to develop innovative programs, evaluate them, and make mid-course corrections that improve them.

There is a range of program designs that policymakers and educators might want to consider in designing new programs and policies, detailed below. The programs vary along a number of dimensions: individual versus school-wide incentives, the measures upon which incentives are based, and whether the reforms include professional development and different roles and responsibilities for teachers.

Achievement Challenge Project

The Achievement Challenge Pilot Project was a pilot pay-for-performance program that operated in Little Rock, Arkansas, from 2004 to 2007. The program was active in five schools by the final year of the program and rewarded teachers with bonuses for student gains on standardized assessments. The purpose of the program was “motivating faculty and staff to bring about greater student achievement gains.” Researchers Gary Ritter and colleagues at the University of Arkansas evaluated the program by analyzing student test data for all students in Little Rock elementary schools between the 2004-05 and 2006-07 school years. They also conducted teacher surveys and interviews to gain an understanding of the program’s effects on teacher attitudes and school climate. They found that the program had positive effects on student achievement in mathematics and reading. Teachers also had somewhat positive attitudes toward the program. But because participating schools self-selected into the program, these results should be interpreted cautiously.

Chicago Public Schools’ Chicago TAP

Chicago Public Schools began implementing the Recognizing Excellence in Academic Leadership program in 10 schools in 2007-08. The REAL program is based on the Teacher Advancement Program and is now known as Chicago TAP. It will serve up to 40 schools over four years and is supported with a $27.4 million Teacher Incentive Fund grant as well as smaller foundation grants and district support. Like other TAP programs, the Chicago program includes multiple career paths for teachers; ongoing job-embedded professional development; instructionally focused accountability using a rigorous, objective evaluation rubric and multiple evaluations each year by trained staff; and performance-based compensation for teachers.

Mathematica Policy Research, Inc. is evaluating the program, and published its first-year report in April 2009. Researchers Steve Glazerman, Allison McKie, and Nancy Carey designed a trial in which eight schools were randomly selected for first-year implementation out of 16 schools that had been approved for the program, with the remaining eight schools scheduled to enter Chicago TAP the following year. They also identified 18 similar Chicago schools not involved in Chicago TAP as a comparison group for additional analyses.

The first-year report found that Chicago TAP significantly improved teacher retention: Chicago TAP teachers were five percentage points more likely than non-TAP Chicago teachers to return to their schools the following year. Teachers in Chicago TAP schools also reported receiving significantly more mentoring and support than their peers in similar schools. After six months of implementation, the study did not find an effect on student achievement large enough to be detected in this sample. Subsequent reports will examine student outcomes over a longer period of time and in a larger sample of schools.

Denver’s ProComp Program

Incentives in Denver’s Professional Compensation Program for teachers are tied to a variety of teacher inputs and outputs. These inputs and outputs can be grouped into four categories: knowledge and skills, professional evaluation, market incentives, and student growth. There are a number of elements within each of these categories that influence the teacher’s salary. In the category of knowledge and skills, for example, teachers can earn salary adjustments for completing professional development units and completing an advanced degree and license. In the category of student growth, teachers whose students’ scores on the Colorado Student Assessment Program exceed district expectations for growth receive a 6.4 percent bonus, and teachers in schools designated as a “Top Performing School” based on the Denver Public School Performance Framework receive a 6.4 percent bonus, as well.

Researchers at the University of Colorado’s School of Education conducted a mixed-methods evaluation that included an analysis of student achievement trends using value-added methodology and a survey to assess principal and teacher attitudes toward the program. In their first-year report, the researchers found that teachers who chose to participate in ProComp produced slightly higher student achievement in reading and mathematics. They were not able to attribute this difference directly to ProComp, however, because it was possible that more successful teachers opted into the program. In addition, a plurality of participating teachers had favorable views of the program and agreed that “ProComp can motivate teachers to improve instructional practice and ultimately improve student achievement.”

Mission Possible program

The Mission Possible program is a comprehensive teacher-incentive program in the Guilford County School System in Greensboro, North Carolina. It is intended to attract and retain effective teachers in struggling schools. The program began in 20 schools during the 2006-07 school year and eight schools were added in the 2007-08 school year with a Teacher Incentive Fund grant from the U.S. Department of Education. The program entails ongoing professional development, collaborative support, and smaller class sizes. Teachers are offered recruitment or retention bonuses to work in Mission Possible schools and become eligible for performance bonuses. Recruitment and retention bonus amounts vary by grade and subject level, but range from $2,500 for teachers in grades K through 5 to $10,000 for Algebra I teachers, a subject shortage area and former area of poor performance for the district.

Teachers in grade levels and subjects that are part of the state and national accountability systems are eligible to receive performance bonuses based on student performance on the state’s assessments. These include third- through fifth-grade teachers; sixth- through eighth-grade math, language arts, and reading teachers; high school math and English I teachers; and curriculum facilitators and principals. The district uses the value-added data model developed by William Sanders of the SAS Institute to produce value-added measures of student achievement for individual teachers and measure student growth. Teachers whose mean student growth is one standard error above the district mean receive a $2,500 performance bonus, while those whose students’ mean growth score is 1.5 standard errors above the district mean receive a $4,000 incentive. Teachers in untested subjects are not eligible for performance bonuses through the district’s grant program, but are eligible to receive school-wide bonuses through the state’s ABC accountability program.
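The district’s tiered bonus rule can be sketched as a simple threshold function. This is an illustrative sketch only; the function name and inputs are ours, not the district’s, and the actual SAS value-added calculation behind the growth measure is far more involved.

```python
def performance_bonus(mean_student_growth, district_mean, standard_error):
    """Illustrative sketch of Guilford County's tiered performance-bonus rule.

    A teacher whose students' mean growth is at least 1.5 standard errors
    above the district mean receives $4,000; at least 1.0 standard error
    above earns $2,500; otherwise no individual performance bonus.
    """
    ses_above_mean = (mean_student_growth - district_mean) / standard_error
    if ses_above_mean >= 1.5:
        return 4000
    if ses_above_mean >= 1.0:
        return 2500
    return 0
```

Under this rule, for example, a teacher whose mean student growth sits 1.2 standard errors above the district mean would fall in the lower tier and receive $2,500.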

Researchers at the SERVE Center at the University of North Carolina at Greensboro are conducting an evaluation of the program. The evaluation consists of comparisons between Mission Possible and non-Mission Possible schools on a number of dimensions of student and teacher outcomes. The data will include student and teacher records, assessment results, interviews, and surveys. Researchers found after one year of implementation that the program schools showed reductions in teacher and principal turnover, increases in the percentage of students passing the state assessment, and improvements in the number of Adequate Yearly Progress goals attained.

North Carolina’s ABCs school-wide bonus program

Duke University researcher Jacob Vigdor evaluated the effect of a school-wide bonus program that has been in operation in North Carolina since the 1996-97 school year. The ABCs program pays bonuses to all teachers in a school based on students’ growth on the state assessments. Levels of bonuses vary for certified and non-certified staff and vary based on the level of student growth. All certified staff in schools that achieve “high growth” based on performance on the state’s assessment receive up to $1,500, while teacher assistants receive up to $500. All certified staff in schools that achieve “expected growth” receive up to $750, while teacher assistants receive up to $375.

Vigdor found some evidence of overall improvements in test scores. Specifically, math proficiency rates in the state increased “both on the high-stakes test used to determine bonus eligibility and on the lower-stakes National Assessment of Educational Progress exam. Reading proficiency rates have improved only on the state’s own examination.” He also found that schools did implement changes when they failed to receive a bonus. But he did not find evidence that the program closed achievement gaps. He theorizes that teachers reacted to the program by leaving disadvantaged schools where they perceived less likelihood of earning bonuses. He posits that the program should have included a measure of expected gains in the formula for determining awards, thereby not disadvantaging teachers in schools where gains are harder to achieve.

Teacher Advancement Program

Under the banner of Vanderbilt University’s National Center on Performance Incentives, researchers Matthew Springer, Dale Ballou, and Art Peng released findings in 2008 from the first independent evaluation of the Teacher Advancement Program in two undisclosed states. TAP is a comprehensive school reform model designed to attract effective teachers, improve the quality of instruction, and improve student achievement.

The study used a panel data set to compare students’ test score gains in mathematics in schools in two undisclosed states that participated in TAP with student test score gains in non-TAP schools. The authors found compelling evidence that TAP schools produce larger gains in the mathematics achievement of students in grades 2 through 5. The reported effects are statistically and educationally significant. Evidence about the effects of TAP schools for older students is less clear-cut, especially given that the study included only a small number of high schools. Nevertheless, the study surfaces the important question of whether measures of achievement gains for older students are sensitive to the level of stakes attached to the tests involved.

Texas Incentive Programs

Texas has made the largest single state investment in performance-pay programs in the country with $868.1 million over a six-year period dedicated to the development of three performance pay programs—the Governor’s Educator Excellence Grant, the Texas Educator Excellence Grant, and the District Awards for Teacher Excellence.

The state piloted performance pay programs with GEEG in 2006, providing $10 million in non-competitive three-year grants to 99 campuses. The state then expanded to TEEG, which provided $100 million per year in funding for annual grants to approximately 1,000 campuses per year. Both programs were targeted to high-performing schools that enrolled high percentages of economically disadvantaged students. The Texas Legislature replaced TEEG and GEEG with the DATE program in 2008-2009, with 203 districts participating in the first year of implementation. DATE was funded at $147.5 million the first year and $397 million over the next two years and is a non-competitive, district-level grant available to all Texas school districts. Districts can choose to implement their program district-wide or target specific campuses.

All three programs separate funding into two parts. Part I funding, which with DATE comprises at least 60 percent of a district’s award, is used to provide incentives to classroom teachers. Part I performance awards must be based on improved student performance using objective, quantifiable measures. Part II funding, which comprises 40 percent or less of a district’s award, may be used for bonuses for other school personnel, recruitment and retention stipends, professional development, teacher mentoring, building data capacity, and other purposes.
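The DATE funding split described above can be illustrated with a minimal calculation. The award amount below is hypothetical; the 60 percent Part I share is the statutory floor, and a district may allocate more than that to classroom-teacher incentives.

```python
def date_award_split(total_award, part1_share=0.60):
    """Illustrative split of a DATE grant into Part I and Part II funds.

    Part I (incentives for classroom teachers, tied to objective measures of
    improved student performance) must be at least 60 percent of the award;
    Part II (bonuses for other personnel, stipends, professional development,
    mentoring, data capacity, and other purposes) is the remainder.
    """
    if part1_share < 0.60:
        raise ValueError("Part I must be at least 60 percent of the award")
    part1 = total_award * part1_share
    part2 = total_award - part1
    return part1, part2
```

For a hypothetical $500,000 district award at the minimum split, this yields $300,000 for Part I teacher incentives and $200,000 for Part II uses.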

Texas has partnered with the National Center on Performance Incentives to evaluate the three programs. The center has so far evaluated the TEEG and GEEG programs and is in the process of evaluating the DATE program using randomized designs.

The first-year evaluation of the GEEG program found that the performance incentive programs appeared to be having “an encouraging impact on schools’ organizational dynamics, teachers’ perceptions of performance incentives, and teachers’ instructional practice.” Teachers viewed the program favorably. For instance, 66.8 percent of teachers either agreed or strongly agreed that the program was having beneficial effects on their school. A majority of teachers—53 percent—also reported making specific changes to their instructional practices in response to GEEG. But the authors felt that it was too soon to attribute these outcomes to the programs. It also was too soon to look at the program’s effect on student achievement and other outcomes.

The evaluations of TEEG also show a number of positive results. The most interesting finding is that the receipt of bonuses reduced teacher turnover among effective educators and increased leave rates for ineffective educators in TEEG schools. Teachers receiving the largest bonuses had a “leave rate” of approximately 2-3 percent; teachers not receiving a bonus had a “leave rate” of approximately 41-47 percent.

The program also had positive effects on school culture and instruction. Nearly 70 percent of TEEG Cycle 2 teachers reported an increase in collegiality. About half of TEEG Cycle 1 and Cycle 2 teachers reported an increase in the use of standards for their instruction. The evaluation could not conclusively connect TEEG with an increase in student achievement, but the authors were hopeful that future evaluations would reveal a correlation.

Review of research on performance pay

A recent synthesis of research also provides some promising evidence in support of performance-pay programs. Researchers Michael J. Podgursky from the University of Missouri and Matthew Springer from Vanderbilt University summarized evaluations of performance-pay programs that used a treatment-and-control design and found that all of these programs had positive effects on the outcome tied to the incentive. Podgursky and Springer concluded that “while the literature isn’t sufficiently robust to prescribe how systems should be designed—optimal size of bonuses, mix of individual versus group incentives—it is sufficiently positive to suggest that further experiments and pilot programs by districts and states are in order.”

Additional evidence will be available beginning in 2011 from the National Center on Performance Incentives, which received a five-year, $10 million grant from the U.S. Department of Education’s Institute of Education Sciences to study the effectiveness of performance incentives. One study of central interest employs a randomized experimental design to assess the causal effect of a pilot program in Nashville public schools. The program allows mathematics teachers to earn bonuses of up to $15,000 per year, conditional on their students’ gains on state exams.

Acknowledgements

This research brief was produced by Robin Chait, Raegen Miller, and Cynthia G. Brown of the Center for American Progress and Kristan Van Hook, John Gutta, and Glenn Daley of the National Institute for Excellence in Teaching.

