
Return on Educational Investment: FAQs

Frequently asked questions for "Return on Investment," a district-by-district evaluation of U.S. educational productivity.


Read the full report (pdf)

What is educational productivity?

In the business world, productivity is a measure of benefit received relative to spending. This project adapts that concept to measure public school districts’ academic achievement relative to their educational spending, while controlling for cost of living, student poverty, the percentage of students in special education, and the percentage of English-language learners.

Why do you say that your evaluations should be approached with caution?

The connection between spending and achievement is complex, and our data cannot capture everything that goes into creating an efficient school system. Nor can we account for every factor outside a district’s control: our adjustments for poverty and special education are estimates that do not reflect variations in severity and type within those demographic groups. Some of the data reported by states and districts are also unreliable, as agencies occasionally use inconsistent definitions and weak data-collection practices. So while we believe our results are meaningful, we caution against reading too much into the evaluation of any individual district.

Should the United States spend less on public education?

Our emphasis on educational productivity does not mean that we believe that lawmakers should spend less on education. Quite the opposite. Transforming our schools will demand both real resources and real reform, and our project is an argument for dramatically improving our nation’s school system so that dollars create results.

Why didn’t you create a single score for each district?

A single score would have masked wide variation in the rankings of districts across our three models. We produced three productivity measures because we wanted to emphasize the complexity of measuring a district’s efficiency and expose educators, policymakers, and the public to different ways of measuring educational productivity.

Did you evaluate districts against a benchmark?

No. We evaluated each district relative to the performance of other districts in the same state. As a result, states with few districts have different evaluative cut points than states with many. We believe this approach, which has been used in other education policy reports, is a fair way to evaluate within-state performance.
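To make within-state cut points concrete, here is a minimal sketch in Python, assuming, purely for illustration, a five-tier color scale assigned by quintile within each state; the districts, scores, labels, and cut points are hypothetical rather than the report’s:

```python
import pandas as pd

# Hypothetical ROI scores for districts in two states (illustration only).
df = pd.DataFrame({
    "state":    ["OH"] * 5 + ["TX"] * 5,
    "district": list("ABCDEFGHIJ"),
    "roi":      [1.2, 0.9, 1.5, 0.7, 1.1, 2.0, 1.8, 2.4, 1.6, 2.2],
})

# Rate each district only against the other districts in its own state,
# so the quintile cut points differ from state to state.
labels = ["red", "orange", "yellow", "light green", "green"]  # assumed tiers
df["rating"] = df.groupby("state")["roi"].transform(
    lambda s: pd.qcut(s, q=5, labels=labels)
)
print(df.sort_values(["state", "roi"]))
```

Because the quintiles are computed per state, a score that earns “green” in one state could earn “orange” in another.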

How did you measure achievement?

We relied on the New America Foundation’s Federal Education Budget Project, which collects district-level student outcome data from the states. We used these data to create an achievement index for each state by assigning each district a score, derived by averaging the percentages of students designated proficient or above on 2008 statewide reading and math tests at fourth grade, eighth grade, and high school.
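Concretely, each district’s score is the simple average of its six proficiency rates: reading and math at each of the three grade levels. A minimal sketch with hypothetical percentages:

```python
# Hypothetical 2008 percentages of students proficient or above
# for one district: reading and math at grades 4, 8, and high school.
proficiency = {
    ("reading", "grade 4"):     72.0,
    ("reading", "grade 8"):     65.0,
    ("reading", "high school"): 61.0,
    ("math",    "grade 4"):     70.0,
    ("math",    "grade 8"):     58.0,
    ("math",    "high school"): 55.0,
}

# The district's achievement score is the simple average of the six rates.
score = sum(proficiency.values()) / len(proficiency)
print(f"Achievement score: {score:.1f}")  # 63.5
```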

How did you measure expenditures?

We used 2008 expenditure data from the National Center for Education Statistics, the most recent year for which complete data are available. We used “current expenditures,” the metric preferred by educational leaders, which include salaries, services, and supplies. We did not use “total expenditures,” which also include capital expenses, because those can fluctuate dramatically from year to year and are thus unreliable for comparisons.

How did you account for differences in revenue sources?

We did not. The fiscal database produced by NCES does not track educational expenditures by specific revenue source.

How did you adjust for differences in cost of living between districts?

We used the Comparable Wage Index, a measure of regional variations in the salaries of college graduates who are not educators. Lori L. Taylor at Texas A&M University and William J. Fowler at George Mason University developed the CWI to help researchers make better comparisons across geographic areas. We used adjustments from 2005, the most recent available.
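One common way to apply a wage index like the CWI is to deflate each district’s per-pupil spending by its index value relative to the state average, so that high-cost areas are not penalized for paying market wages. A minimal sketch; the figures and the exact normalization are illustrative assumptions, not the report’s published formula:

```python
# Hypothetical per-pupil spending and 2005 CWI values for one state.
districts = [
    {"name": "Urban",    "spending": 11000.0, "cwi": 1.10},
    {"name": "Suburban", "spending":  9500.0, "cwi": 1.02},
    {"name": "Rural",    "spending":  8000.0, "cwi": 0.88},
]

# Normalize each CWI against the state average so adjusted dollars stay
# on a familiar scale, then deflate spending by that ratio.
avg_cwi = sum(d["cwi"] for d in districts) / len(districts)
for d in districts:
    adjusted = d["spending"] / (d["cwi"] / avg_cwi)
    print(f"{d['name']}: ${d['spending']:,.0f} -> ${adjusted:,.0f}")
```

After the adjustment, the high-wage urban district’s nominal spending advantage shrinks, and the low-wage rural district’s spending rises toward it.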

Did you adjust for enrollment or economies of scale?

Apart from excluding any district serving fewer than 250 students, we did not adjust for economies of scale, because it is difficult to apply such adjustments fairly across state and district lines. There is also debate within the research community over what economies of scale say about the quality of a district’s management. But given the potential impact that size and location can have on a district’s spending, we made it easy to sort districts by enrollment and geography on our interactive website.

Why did you use the percentage of students at or above the “proficient” rather than “basic” level to create your achievement index?

The proficient level indicates a firm grasp of the knowledge and skills needed to succeed at grade level. Students scoring at the basic level have only partially mastered the necessary knowledge and skills.

My district scores well on standardized tests, so why does it do poorly on your Basic and Adjusted Return on Investment indexes?

We rate districts on how much academic achievement they get for each dollar spent, while controlling for factors outside a district’s control, such as cost of living and student poverty. A district therefore receives high marks on our basic and adjusted ROI indexes only if it has both high achievement and low spending relative to other districts in the same state. Districts with high achievement and high spending by definition fare less well, as do districts with low achievement and low spending.
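In spirit, the basic index divides a district’s achievement by its cost-adjusted spending, so the rating reflects both quantities at once. A minimal sketch with hypothetical districts; the report’s actual scaling and demographic adjustments are more involved:

```python
# Hypothetical districts in one state: achievement index and
# cost-of-living-adjusted per-pupil spending.
districts = [
    ("High achievement, low spending",  80.0,  9000.0),
    ("High achievement, high spending", 80.0, 13000.0),
    ("Low achievement, low spending",   55.0,  9000.0),
]

# Basic ROI: achievement per thousand adjusted dollars. Only the first
# district pairs high achievement with low spending, so it ranks highest.
for name, achievement, spending in districts:
    roi = achievement / (spending / 1000.0)
    print(f"{name}: ROI = {roi:.2f}")
```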

My district scores poorly on standardized tests. Can it do well on your Basic and Adjusted Return on Investment indexes?

No. School districts with low student achievement cannot get a color rating higher than orange—or just below average—on either the basic or the adjusted ROI indexes.

My district scores poorly on standardized tests, so why does it do so well on your Predicted Efficiency Index?

The Predicted Efficiency Index measures whether a district’s achievement is higher or lower than what its per-pupil spending and its percentage of students in special programs, such as subsidized school lunches, would predict. Under this approach, a low-achieving district could get high marks if it performed better than expected.
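A standard way to build such a measure is to regress achievement on spending and demographics, then treat each district’s residual, actual minus predicted achievement, as its efficiency score. A minimal sketch using ordinary least squares on hypothetical data; the report’s model specification may differ:

```python
import numpy as np

# Hypothetical within-state data: per-pupil spending (thousands of
# dollars), percent of students in subsidized-lunch programs, and the
# achievement index.
spending    = np.array([ 8.0,  9.5, 10.0, 11.0, 12.5, 13.0])
pct_lunch   = np.array([65.0, 55.0, 70.0, 40.0, 30.0, 50.0])
achievement = np.array([58.0, 62.0, 66.0, 71.0, 78.0, 70.0])

# Fit achievement = b0 + b1*spending + b2*pct_lunch by least squares.
X = np.column_stack([np.ones_like(spending), spending, pct_lunch])
coeffs, *_ = np.linalg.lstsq(X, achievement, rcond=None)
predicted = X @ coeffs

# The efficiency score is the residual: a district beating its
# prediction scores positive even if its raw achievement is low.
for i, (actual, pred) in enumerate(zip(achievement, predicted)):
    print(f"District {i}: actual {actual:.1f}, "
          f"predicted {pred:.1f}, residual {actual - pred:+.1f}")
```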

Can I compare districts across states?

Because each state has its own student assessment program, the return-on-investment measures on our website are restricted to within-state comparisons of districts; comparisons across states are not meaningful. We were able, however, to conduct a special cross-state analysis of the urban districts that participated in the Trial Urban District Assessment (TUDA), the only source of comparable district-level student performance data across states. The results of that analysis appear in the full report.

Why is my district not included in your evaluation?

We restricted our study to districts that teach kindergarten through 12th grade and serve more than 250 students. We also excluded districts classified as a charter school agency, state-operated institution, regional education services agency, supervisory union, or federal agency. These restrictions ensured that the districts we evaluated were comparable to one another. Finally, we excluded districts with inadequate demographic, achievement, or expenditure data.

Why is my state not included in your evaluation?

We did not produce results for Alaska, the District of Columbia, Hawaii, Montana, and Vermont. D.C. and Hawaii each have only one school district, so within-state comparisons are not possible. Montana and Vermont likewise did not have enough comparable districts for meaningful results. We excluded Alaska because we could not sufficiently adjust for cost-of-living differences within the state.


The positions of American Progress and our policy experts are independent, and the findings and conclusions presented are those of American Progress alone. A full list of supporters is available here. American Progress would like to acknowledge the many generous supporters who make our work possible.