Report

Subtraction by Distraction

Publishing Value-Added Estimates of Teachers by Name Hinders Education Reform

Diana Epstein explains why publicly identifying teachers with value-added estimates will actually undermine efforts to improve public schools.

The Los Angeles Unified School District's Young Oak Kim Academy, a school that teaches boys and girls in single gender core classes, including technology, math, science, and engineering, is seen on its first day of school, Wednesday, September 9, 2009, in Los Angeles. (AP/Damian Dovarganes)


In August 2010 the Los Angeles Times published a special report on its website featuring performance ratings for nearly 6,000 Los Angeles Unified School District teachers. The move was controversial because the ratings were based on so-called value-added estimates of teachers’ contributions to student learning. These estimates statistically account for the different academic backgrounds children bring to teachers’ classes, but they are estimates nonetheless, and opaque ones at that. But the newspaper maximized the controversy—and perhaps the number of hits it drew to web pages with advertising—by attaching teachers’ names to the ratings. Parents and other interested members of the public could look up specific teachers in the database and see how they ranked in both math and English, from least effective to most effective.
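To make the statistical idea concrete, here is a minimal sketch of the kind of calculation involved, written in Python with entirely hypothetical data and column names: students’ current scores are predicted from their prior scores, and a teacher’s “value-added” is the average amount by which his or her students beat that prediction. Real models, including the one behind the Times’s ratings, control for many more factors and are considerably more elaborate.

```python
# Minimal sketch of a value-added estimate (hypothetical data and columns).
# Real models control for many more student and classroom characteristics.
import numpy as np
import pandas as pd

# Hypothetical student records: prior-year score, current score, teacher ID.
students = pd.DataFrame({
    "prior_score":   [52, 61, 48, 75, 66, 58, 80, 45, 70, 63],
    "current_score": [55, 65, 50, 80, 64, 62, 85, 44, 76, 61],
    "teacher":       ["A", "A", "A", "B", "B", "B", "C", "C", "C", "C"],
})

# Step 1: predict current scores from prior scores alone (ordinary least squares).
slope, intercept = np.polyfit(students["prior_score"], students["current_score"], deg=1)
students["predicted"] = intercept + slope * students["prior_score"]

# Step 2: a teacher's "value-added" is the average amount by which his or her
# students exceeded (or fell short of) their predicted scores.
students["residual"] = students["current_score"] - students["predicted"]
value_added = students.groupby("teacher")["residual"].mean()

print(value_added.sort_values(ascending=False))
```

Even this toy version makes the core point visible: the estimate is a statistical residual, not a direct observation of teaching quality.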

As with most value-added estimates, the data were based on students’ standardized test scores, and teachers were ranked relative to their peers. Publishing these records sparked a fierce debate about whether it was appropriate to make this kind of personnel information publicly available. That debate continues, as the newspaper recently published an updated database of value-added scores for 11,500 teachers.

A few months later, the New York City school system planned to release value-added scores for more than 12,000 teachers in response to multiple public-records requests from news organizations. The local teachers union objected, particularly since the district had originally promised to keep the information private. In January 2011 a judge ruled that the records could be released, the union appealed, and the release was halted.

Then, on August 25, 2011, an appellate judge ruled that the scores—with teacher names attached—could in fact be released. The union appealed again, and in September 2011 the appellate court affirmed that the New York City Department of Education must release the names along with the value-added ratings for approximately 12,000 teachers. The timeline for compliance with the ruling depends on a further appeal threatened by the union and on the city schools’ decision to abandon the teacher-rating system at issue in favor of a new statewide system.

In both cases, media outlets play the role of protagonist. The judge’s ruling in New York stems from a freedom of information request made by The New York Times and other press outlets, including The Wall Street Journal, New York Post, New York Daily News, and local news channel NY1. These companies purport, as the Los Angeles Times does, to serve the public interest in improving schools by publishing, by name, teacher ratings based on value-added estimates.

At first glance the idea seems to possess intuitive appeal. After all, research using value-added estimates shows that teachers are the most important school-based driver of students’ academic success. So why not turn teacher ratings based on value-added estimates into a vehicle by which interested parties, especially parents, might pressure school officials into making tough, school-improving decisions?

But the decision to publish this information is in fact not so simple. As value-added measures become an accepted component of teacher evaluations, states and school districts will increasingly have to grapple with the question of how much information should be made available to the public and how much should remain private, given that it concerns individual teachers’ job performance. This issue brief lays out the main issues to consider and presents examples of how various states and districts are choosing to handle this thorny subject. We also highlight distinctions between value-added scores computed internally by districts and those constructed externally, the implications for their use in each case, and the duties and responsibilities of those computing and publishing the measures.

This issue brief argues that publicly identifying teachers with value-added estimates will actually undermine efforts to improve public schools. In short, linking names to value-added estimates subjects teachers to an open-ended set of consequences—parents lobbying principals for their children’s reassignment, for instance—but value-added estimates are not fit for any old use. In particular, they should never be used as the sole basis for informing high-stakes decisions about individual teachers, a position the Center for American Progress has long held. Our arguments bear repeating in the context of the Los Angeles and New York City dust-ups. Though we argue against publicly identifying individual teachers with value-added estimates, we do think that value-added estimates can responsibly serve the public interest in other ways.

The worst way of rating teachers

Winston Churchill observed, “Democracy is the worst form of government except all those other forms that have been tried,” and something similar can be said of value-added estimates as measures of teachers’ effectiveness. The data requirements of value-added methodology currently limit estimates to a minority of teachers: only those for whom appropriate student test-score information is available. Value-added estimates also inherit the limitations of achievement tests, which measure students’ knowledge with error. And unmeasured factors affecting the instructional challenge facing teachers may bias or distort the estimates, which in any case speak to just one facet of the complex work of teaching.

Despite these and other shortcomings, value-added estimates currently afford a better window on a teacher’s future impact on student learning than any other available measure. This finding clearly wins value-added estimates a place in the suite of tools that should inform workforce policies and stimulate improved teaching and learning. Yet such a Churchillian endorsement also highlights the failure of traditional performance evaluation, in which 99 percent of teachers are rated satisfactory, to offer meaningful information about teachers’ efficacy in boosting student achievement.

The charade of traditional performance evaluation is rightly under attack. Too few talented college graduates and career changers will enter teaching so long as chronically ineffective teachers are allowed to fly under the radar and compensation remains oblivious to performance. And existing teachers cannot improve their craft without frank, specific feedback on their performance. The remedy embraced by the Center for American Progress and other groups includes folding measures of teachers’ effectiveness in boosting student achievement directly into performance evaluation.

Policymakers have embraced this idea. The Race to the Top program, a competitive federal program first funded by the American Recovery and Reinvestment Act of 2009, requires grant recipients to integrate measures of student achievement gains into teachers’ performance evaluation. Legislatures around the country—even in states with little interest or traction in competing for Race to the Top funding—have passed bills reflecting this requirement. Currently at least 16 states use some measure of student-achievement growth as a component of teachers’ performance evaluations.

Other examples from around the country

In addition to the high-profile cases in Los Angeles and New York City, a number of other states and districts around the country are grappling with how much information from teacher evaluations, if any, to make publicly available. One notable example is Louisiana, where legislation has made value-added scores part of teacher evaluation but not part of the public record: school-level value-added scores will be released to the public, but individual teacher scores will not be.

Similarly, a bill pending in the Minnesota legislature would require school report cards to include the number of teachers in each performance category and to be posted online. Illinois, by contrast, has decided that the results of teacher and principal evaluations will not be made public.

Then there’s Houston, Texas, where value-added estimates are one of the criteria that determine whether a teacher receives a performance bonus. Teacher rankings are not available to the public, but the Houston Chronicle cleverly requested the names of the teachers who received a bonus as a way to ascertain teacher effectiveness. The district refused the request, but the Texas attorney general overruled the decision, and the names of the winners (and therefore the most effective teachers) were published.

Driven to distraction

Publicly identifying teachers with value-added estimates of their effectiveness could slow, distort, or cripple efforts to implement and refine new performance-evaluation systems. The point of evaluation reform, after all, is to alter fundamentally the composition and behavior of the teaching workforce. And this means enabling school and district officials to make decisions consistent with the strategic goals of improving student achievement overall and closing achievement gaps. High-stakes decisions at the top of the list include continued employment, tenure, compensation, and eligibility for roles carrying additional responsibility and pay.

High-stakes uses of evaluation results put a premium on the fairness and validity of new performance-evaluation systems, and the use of multiple measures of performance can bolster these qualities. But placing too much weight on any one measure can easily undermine fairness or validity. One measure of performance, value-added estimates, can tether a system to outcome-oriented goals. But a line intended for tying a boat to a dock will snap if used to hoist the boat from the water.

How much weight value-added estimates can bear without sending a nascent evaluation system crashing down is an open question. One technical reason is that the relationship between value-added estimates and other measures of performance depends on the features of the student-achievement tests involved. Better tests should support more weight, by and large. But such technical matters will be of purely academic interest if teachers are publicly identified with value-added estimates.

The theory of public service invoked in Los Angeles supposes that parents will leverage the newspaper’s website to exert pressure on school officials to make decisions differently than they otherwise would have. Some parents, for example, may request that principals move children from the classrooms of teachers with low value-added estimates to the classrooms of teachers with high value-added estimates, or that additional support staff or other resources be applied in the classrooms of teachers with low value-added estimates.

Official resistance to such pressure will be difficult in the absence of a defensible performance-evaluation system. Yet the scantest evidence of decisions based on individual value-added estimates will almost certainly undercut teachers’ willingness to engage constructively in the implementation and refinement of new performance-evaluation systems.

Thus any public association between teachers’ names and their value-added estimates will create a kind of vicious circle. Instead of tethering performance evaluation for current teachers to the goal of improving students’ academic achievement, value-added estimates will help preserve the status quo. And this in turn will discourage some highly able college graduates and career changers from tackling the challenges of teaching in public schools at all, and others from serving beyond an altruistic foray of a few years.

Big medicine

The role that value-added estimates play in the medical field offers valuable guidance for their appropriate use in public education. In health care, value-added estimates go by the grim but descriptive name “risk-adjusted mortality rates,” and publicly available tables of risk-adjusted mortality rates are commonplace. The most commonly seen tables, however, link estimates to medical centers or regions, or provide a basis for tracking an individual institution’s performance over time. Risk-adjusted mortality rates are publicly linked to individual doctors only when their practice focuses on a rather specific type of medical intervention, such as coronary artery bypass graft surgery.

One has a hard time finding risk-adjusted mortality rates associated with identifiable pediatric generalists, and for good reason. Pediatric generalists engage in a broad range of medical interventions, many of which involve only the minutest chance of a mortal outcome. This fact renders risk adjustment imprecise, and any risk-adjusted mortality values one might produce would have little bearing on most of the reasons that parents might prefer one pediatric generalist to another.

In education as in medicine, the media faces questions about what level of public disclosure is appropriate and what is responsible journalism. We believe it is irresponsible to publish teacher names with value-added estimates in the same way that it would be reckless for newspapers to publish risk-adjusted mortality rates for pediatric generalists.

Who does the analysis, and what is published?

Both the Los Angeles Times and the New York City news outlets had legitimate reasons for wanting (or needing) to release value-added scores—both could be framed as a desire to increase public information and transparency. Nonetheless, the two cases are markedly different because of where the value-added estimates come from.

In Los Angeles the estimates came from an independent consultant working for the Los Angeles Times, using a dataset the school district provided in response to a request. In New York City, by contrast, the value-added scores were generated by the district itself as part of its normal evaluation procedures. Let’s look at the larger ramifications of each approach to public disclosure in turn.

Provisions in state law

Teacher-evaluation systems are often delineated in state law. Some states mandate very detailed requirements for teacher-evaluation systems while other states leave many of the details to the discretion of individual districts. Similarly, states may or may not mandate how much information from those evaluations is available to the public. Given the primacy of state law in these matters, the first question to consider is whether the state law is prescriptive, restrictive, or flexible in stating the extent to which release of evaluation and/or value-added information is under districts’ control. For ease of reading, this brief assumes that these decisions are made at the state level, but the same questions apply equally to districts should they have responsibility for these decisions.

An important part of this discussion is whether or not teacher evaluations and/or value-added scores are part of the public record. The answer to this question can usually be found in state law or in teacher contracts. If evaluations are part of teachers’ confidential personnel files, then it may be the case that those evaluations cannot be released publicly and are not subject to Freedom of Information Act requests. Individual value-added scores may be confidential if they are used in teacher evaluation but public if they are not used in this way.

If teacher data—such as value-added scores—are not confidential, then they may be part of the public record and available to the public via public-records requests. In these situations states and districts need to consider whether the information should be released proactively on an annual basis or only in response to a public-records request.

It may also be the case that only some parts of the evaluation can be made available to parents and other members of the public. In some cases it may be the entire evaluation, the value-added score itself, or the performance category in which the teacher falls (least effective, average, most effective, etc.). In other cases, individual teacher information may not be publicly available but the average value-added scores for a school or for a grade level within a school are.

Even if the information can be released by the district, that does not mean the district necessarily should release it. Those in favor of release would argue that it is valuable for transparency, which may be particularly important in public education, a foundational element of our society. Yet publicly releasing evaluation information may put teachers on the defensive, making them less willing to use evaluation results for self-reflection and continuous improvement. It may also create tension and strife between teachers, administrators, and parents that is detrimental to student learning.

Publishing by journalists

The Los Angeles Times case presents an interesting example in which the newspaper received access to the district’s student-level database and then contracted an independent consultant to do the value-added analysis. The model used is similar to—but not exactly the same as—the model that the district is using as part of its own internal value-added analysis. It is likely that some teachers will be ranked differently in the two models, which raises the question of which model is “correct.” While teachers themselves will be able to compare their rankings from both models, the public will only have access to the results from the newspaper’s analysis. In contrast, school-level ratings from the district have been released to the public by the district.

A newspaper might responsibly use value-added scores to get the public engaged and to expose the inequitable clustering of teachers rated ineffective in certain schools or parts of a district. Some would argue that publishing evaluation information is necessary in order to ensure full transparency and accountability for what happens inside the school walls. Similarly, the responsible use of value-added scores by the press could pressure reluctant districts into computing value-added scores and using them in serious ways.

We believe that public access to district datasets should be encouraged because it is important to continue building a data-driven culture within the field of education. Limiting researchers’ access to district datasets could set the field back years in terms of knowledge generation and dissemination, and should be avoided. Researchers should continue to have access to these datasets, as should journalists.

But journalists should follow the standards that researchers use at universities, think tanks, and other similar institutions. These standards include human-subjects protections, which prohibit publication of individual teacher or student names. It is not unreasonable to suggest that journalists working with the same datasets should follow the ethical guidelines long established by researchers.

Moreover, journalistic codes of ethics might be interpreted as proscribing the publication of teachers’ value-added scores by name. The Los Angeles Times’s ethics guidelines state, for instance, that “our coverage should avoid simplistic portrayals.” But what is publishing rankings based on a single, suspect measure if not a simplistic portrayal of the relative efficacy of teachers?

Consequences of publishing value-added scores

Publishing value-added teacher performance scores raises three broad considerations. The first is the right of parents to know this information about their children’s teachers. The second is the broader public’s need to understand what these scores mean in the aggregate for the common good of a better public-education system. And the third is the risk that publishing these scores poses to individual teachers—a risk that cuts to the core of public-education reform. So let’s look at each in turn.

Parental notification

Some advocates and policymakers argue that parents have a right to know if their child is in the class of a teacher who has been identified as ineffective. Similarly, parents might want to know if their child has been placed in the class of a particularly effective teacher. If evaluation information is made public, either proactively or via public-records request, then the state could choose a passive stance and leave to parents the responsibility of obtaining this information.

In contrast, some states have decided to proactively notify parents if their child is being taught by an ineffective teacher. In Indiana a recently enacted law stipulates that parents must be notified if their child has an ineffective teacher for two years in a row. Florida recently passed a similar bill, which requires notification if a child is in the class of a teacher who has been rated ineffective for three years in a row. In Michigan parent notification was included in a teacher-tenure reform law passed in July: beginning in the 2015-16 school year, parents must be notified if their child is assigned to a teacher who was rated ineffective on the past two year-end evaluations.

It is too soon to tell what consequences these steps will have. But it seems likely that such notification will spark parents to demand higher-performing teachers for their children. Absent an identified pool of unemployed effective teachers, it is not clear how a school or district could respond to such demands. Furthermore, rearranging class assignments based on value-added estimates of teachers’ performance in previous years could introduce bias into the next year’s value-added estimates.

We believe that any parent notification should only be on the basis of the entire evaluation, not the value-added measures alone. This is consistent with our view that value-added measures should be a significant portion—but only one component—of an evaluation that includes multiple measures of effectiveness.

Public understanding

It is well documented that value-added scores do not represent “the truth.” As with most statistical modeling, the results are only as precise as the underlying data. The models are not perfect, standardized test scores do not fully capture student learning, and measures of student growth do not reflect many aspects of a teacher’s effectiveness.
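A small simulation illustrates how far from “the truth” such scores can be. In the hypothetical sketch below, every teacher is given exactly the same true effectiveness, yet ordinary test-score noise alone produces a spread of estimated scores and rankings that barely agree from one year to the next. The class size and noise level are assumed values chosen only for illustration.

```python
# Hypothetical simulation: identical teachers, noisy tests, unstable rankings.
import numpy as np

rng = np.random.default_rng(0)
n_teachers, class_size, noise_sd = 50, 25, 10.0  # assumed values

def estimated_value_added():
    # Every teacher's true effect is zero; only measurement error varies.
    class_noise = rng.normal(0.0, noise_sd, size=(n_teachers, class_size))
    return class_noise.mean(axis=1)  # per-teacher average "gain"

year1 = estimated_value_added()
year2 = estimated_value_added()

# Despite identical true effectiveness, year-to-year rankings barely agree.
rank1, rank2 = year1.argsort().argsort(), year2.argsort().argsort()
correlation = np.corrcoef(rank1, rank2)[0, 1]
print(f"Year-to-year rank correlation from noise alone: {correlation:.2f}")
```

With these assumptions the rank correlation hovers near zero. Real value-added estimates carry signal as well as noise, but the exercise shows why single-year rankings deserve cautious interpretation.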

Given the complexity and imprecision of the models, states should consider whether they are being responsible stewards by releasing value-added information to the general public. They must also decide how best to present documentation explaining the models and what value-added scores do and do not mean about a teacher’s effectiveness.

Regardless of how—and how much—information is released to the public, parents and other interested parties will understandably have questions. Schools, districts, and states need to build their capacity to answer questions about value-added models, which will range from the simple (“How can I find the aggregate value-added scores for my child’s school?”) to the complex (“Why have certain variables been included or excluded in the value-added model?”).

Risks to education reform

Teachers are the most important in-school factor for student achievement. It is vitally important that they are included in shaping education reform. Teachers also need to be treated as professionals, and as such their privacy should be protected. Publicly releasing teachers’ value-added scores with names attached has the potential to antagonize teachers and make them less willing to collaborate with districts and states in future reform efforts. That’s why the public’s desire for transparency should be balanced against the protections that teachers, as valued professionals, deserve.

Refraining from publishing value-added estimates for individually identifiable teachers leaves plenty of room for appropriate uses of the estimates. For example, one use is to compute and release aggregate value-added estimates at the school level. This provides information about the quality of teaching at that school and can also highlight any issues related to the distribution of teacher talent among schools within a district or among districts within a state. There is a legitimate tension between laws exempting performance evaluations from Freedom of Information Act requests and parent-notification laws, but we hope we have made it clear that printing value-added estimates by name under the aegis of the public interest in improving schools is misguided journalism. To safeguard education reform, states may need to consider reforming their FOIA statutes.
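As a purely illustrative sketch of that kind of aggregate release, with hypothetical school names, internal teacher IDs, and scores, the computation is a simple grouping step: teacher-level estimates stay inside the district, and only the school-level summary is published.

```python
# Hypothetical sketch: publish school-level aggregates, not named teachers.
import pandas as pd

# Teacher-level estimates stay internal; names never leave the district.
teacher_estimates = pd.DataFrame({
    "school":      ["Adams", "Adams", "Adams", "Baker", "Baker", "Baker"],
    "teacher_id":  [101, 102, 103, 201, 202, 203],  # internal IDs only
    "value_added": [0.12, -0.05, 0.30, -0.20, 0.02, -0.15],
})

# Only the school-level summary is released publicly.
public_report = (
    teacher_estimates
    .groupby("school")["value_added"]
    .agg(mean_value_added="mean", teachers_measured="count")
    .round(2)
)
print(public_report)
```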

Conclusion

Value-added scores give us important information, so they should continue to be used as part of teacher-evaluation systems. Parents and the public have a right to transparent information about teachers, but teachers’ privacy needs to be protected. Public identification of teachers with value-added estimates will undermine efforts to improve schools by hamstringing efforts to make actual classroom performance the basis for decisions affecting the career prospects of currently practicing teachers, and by hoisting red flags of caution for college graduates and career changers inclined toward the profession.

The bottom line is this: Teachers need to be part of reforms but releasing names in this way only leads to conflict and runs counter to the need for collaboration. We note also that parent notification is a particularly tricky issue that needs considerably more thought than we were able to devote to it in this brief.

Releasing value-added scores at the school level is appropriate, however, and this could serve valuable purposes related to transparency and accountability. Districts could aggregate value-added scores and evaluations by grade, or by school, as a component of a robust accountability system that could then be folded into the requirements of state or national accountability laws. Publicly releasing such aggregate information could play an important role in documenting whether or not highly effective teachers are equitably distributed among schools in a district and among districts in a state.

If journalists attempt to do their own analyses of value-added data, they should follow the same standards that researchers do when protecting human subjects. This means that data are de-identified and individual names are never published.

Furthermore, datasets should continue to be available to researchers—whether in academic institutions or in media outlets. Such research is absolutely critical in order to develop a deeper knowledge base about value-added scores, their potential uses, and misuses that should be avoided.

Diana Epstein is a Senior Education Policy Analyst and Raegen Miller is Associate Director for Education Research at American Progress.


The positions of American Progress, and our policy experts, are independent, and the findings and conclusions presented are those of American Progress alone. American Progress would like to acknowledge the many generous supporters who make our work possible.
