The U.S. Chamber of Commerce, the Center for American Progress, and Frederick M. Hess of the American Enterprise Institute evaluated the states in eight broad categories. Here are our sources, methodology, and data for each of the categories.
School Management
To calculate a letter grade for this category, we used six indicators, with each indicator accounting for one-sixth of the total grade. We then assigned letter grades based on the resulting numerical value, using the following scale: 90 to 100 = A, 80 to 89 = B, 70 to 79 = C, 60 to 69 = D, and below 60 = F. If the state’s numerical value was less than .50 from the next highest grade category, we rounded up and gave that state the higher grade. For example, if a state earned a numerical value of 89.8, it earned an A and not a B. We did not use a curve. In the print edition of the report, states are ranked from highest to lowest according to their relative performance. If states had the same numerical score, they were listed alphabetically.
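The scale and rounding rule above can be sketched in Python (the function name is ours; we read "less than .50" strictly, so a value exactly .50 below a cutoff is not rounded up):

```python
# Map a category's numerical value to a letter grade. A score that falls
# less than 0.5 points short of the next cutoff is rounded up, so 89.8
# earns an A while 89.4 stays a B.
def letter_grade(score):
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if score >= cutoff or cutoff - score < 0.5:
            return grade
    return "F"
```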
For the first indicator, strength of state standards, we relied on the work of the Thomas B. Fordham Institute, a Washington-based think tank, which evaluated the quality and rigor of each state’s science, math, and English standards. To calculate grades, we converted Fordham’s letter grades into numerical ones (A = 100 points, B = 85 points, C = 75 points, D = 65 points, and F = 59 points). When grading the states, the Fordham Institute used pluses and minuses in its report; we did not include those gradations in our evaluation. The Fordham Institute also did not evaluate Iowa’s academic standards, and we gave Iowa an overall grade in this category using the remaining five indicators.
For the second indicator, state sanctions low-performing schools, we used data from Editorial Projects in Education (EPE). As part of its research, the organization gave credit to a state if it had a policy to sanction all the schools in the state, not just schools receiving Title I funds. EPE did not, however, look at whether the state had actually sanctioned a school. We awarded a state 100 points if it had a policy and 59 points if it did not.
The data for the third indicator, state provides rewards to high-performing or improving schools, also came from EPE. To receive credit, the state must reward schools with extra funding based on their performance. We gave a state 100 points if it had such a program and 59 points if it did not.
For the fourth indicator, strength of charter school law, we relied on data from the Center for Education Reform, a Washington-based research and policy group, converting their grades into numerical values (A = 100 points, B = 85 points, C = 75 points, D = 65 points, and F = 59 points). We assigned states that did not have a charter school law a failing score (59 points). States without charter school laws are Alabama, Kentucky, Maine, Montana, Nebraska, North Dakota, South Dakota, Vermont, Washington, and West Virginia.
We obtained the data for the fifth indicator, percentage of teachers who say that routine duties and paperwork interfere with teaching, from the 2007-2008 Schools and Staffing Survey (SASS). A nationally representative survey of teachers and principals administered every four years by the National Center for Education Statistics, the SASS asked teachers whether they strongly agree, somewhat agree, somewhat disagree, or strongly disagree with the following statement: "Routine duties and paperwork interfere with my job of teaching." We examined the percentage of teachers who strongly disagree and reported the data in the tables as: Percentage of teachers who say that routine duties and paperwork do not interfere with their teaching. For grading purposes, we ranked the states on a broad curve, with the top 10 states receiving 100 points, the next 10 states receiving 85 points, and so forth.
The data for the sixth indicator also came from SASS. The information came from the teacher questionnaire, which asked teachers if they strongly agree, somewhat agree, somewhat disagree, or strongly disagree with the statement: "I like the way things are run at this school." We examined the percentage of teachers who strongly agree and reported the data in the tables as: Percentage of teachers who like the way things are run at their schools. For grading purposes, we ranked the states based on a broad curve, with the top 10 states receiving 100 points, the next 10 states receiving 85 points, and so forth.
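For these two SASS indicators, the broad curve can be sketched as follows. The 10/10/11/10/10 block sizes mirror the quintile split the report uses across its 51 jurisdictions, and the point values for the lower blocks (75, 65, 59) are an assumption on our part, based on the scale used for the other indicators:

```python
# Rank states from best to worst on an indicator value and award points
# in blocks: top 10 get 100, next 10 get 85, next 11 get 75, next 10
# get 65, and the bottom 10 get 59. Ties are broken alphabetically, as
# in the report's rankings.
def curve_points(values):
    schedule = [100] * 10 + [85] * 10 + [75] * 11 + [65] * 10 + [59] * 10
    ranked = sorted(values, key=lambda state: (-values[state], state))
    return {state: schedule[i] for i, state in enumerate(ranked)}
```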
In the school management category, we also awarded states gold stars if they had a certain policy or program. These columns are shaded gray to indicate that they were not included in the calculations of the grades. For the first gold star indicator regarding expanded learning time policy, we relied on data supplied by the National Center on Time & Learning (NCTL). For a state to receive credit from NCTL, the program had to meet a number of criteria, including that all children in a school must participate and that the initiative must focus on redesigning the school day or school year instead of simply tacking on extra hours.
For the second gold star indicator, charter school accountability, we analyzed data from the Center for Education Reform. We awarded a state a gold star if it met two criteria: (1) it had more than 250 charter schools, and (2) more than 15% of its charter schools had been shut down. To be sure, not all of the charter schools were shut down by authorizers for poor academic performance. According to the Center for Education Reform, many were closed, or closed on their own, for fiscal or management issues. But we believe it is appropriate to highlight the states that have both created the conditions for a vibrant charter sector and taken care to create an environment in which failing or mismanaged schools shut their doors. Of course, shuttering more schools is not ipso facto a good thing, and quality control can certainly reach a point where it squelches the charter sector. However, it is our judgment that no state is anywhere close to that point, and thus it is worth acknowledging those states where quality control is being taken seriously.
SOURCES
Strength of state standards: The State of State Standards, 2006, Chester E. Finn, Jr., Liam Julian, and Michael J. Petrilli. Thomas B. Fordham Institute, August 2006.
State sanctions low-performing schools: Education Counts, Editorial Projects in Education, 2008, http://www.edcounts.org/.
State provides rewards to high-performing or improving schools: Ibid.
Strength of charter school law: Race to the Top for Charter Schools, The Center for Education Reform, June 2009.
Percentage of teachers who say that routine duties and paperwork do not interfere with their teaching: Schools and Staffing Survey, National Center for Education Statistics, U.S. Department of Education, Teacher Data File, 2007-08.
Percentage of teachers who like the way things are run at their schools: Ibid.
State has expanded learning time policy: National Center on Time & Learning, August 2009.
Charter school accountability: Original analysis by authors using data from The Accountability Report: Charter Schools, The Center for Education Reform, February 2009.
Finance
To calculate a grade for this category, we averaged together five indicators, with each indicator accounting for one-fifth of the total grade. We then assigned letter grades based on the resulting numerical value, using the following scale: 90 to 100 = A, 80 to 89 = B, 70 to 79 = C, 60 to 69 = D, and below 60 = F. If the state’s numerical value was less than .50 from the next highest grade category, we rounded up and gave that state the higher grade. For example, if a state earned a numerical value of 89.8, it earned an A and not a B. We did not use a curve. In the print edition of the report, states are ranked from highest to lowest according to their relative performance. If states had the same numerical score, they were listed alphabetically.
For the first indicator, authority over teacher pay, we relied on data collected by the National Council on Teacher Quality (NCTQ), a Washington-based research and policy organization. NCTQ evaluated the states against a number of criteria, including whether states set minimum salary schedules. We awarded a state 100 points if it earned credit from NCTQ and 59 points if it did not. NCTQ reported that Colorado gave school districts the option of a salary schedule, a performance pay policy, or a combination of both, and we gave Colorado credit in our tables. Rhode Island, however, required that local district salary schedules be based on years of service, experience, and training, and therefore, we did not give Rhode Island credit.
The second indicator looked at teacher performance pay programs, and we relied again on data from NCTQ. We gave credit to a state only if it had a performance pay program that connected performance to evidence of student achievement and was open to all teachers in the state. We awarded a state 100 points if it had such a program and 59 points if it did not.
For the third indicator, online accessibility of state finance data, we surveyed each state department of education’s Web site to see whether it made finance data easily available. Specifically, a researcher searched for total expenditure per district, total revenue per district, average teacher salary at the district level, and per pupil expenditure at the district level. The researcher used a generalized path through the state’s Web site, first visiting any web pages devoted to school finance and then looking through the navigation bar for any links to statistics or annual reports.
To simulate the experience of a parent or taxpayer, the researcher limited herself to 10 minutes on the Web site and did not use the site’s internal search function. The researcher surveyed the Web sites between June 2 and June 16, 2009, and we based our grades on the number of pieces of finance data that the researcher discovered on the site. A state received an A (100 points) if the researcher located all four pieces of finance data; a B (85 points) if the researcher found only three pieces of data; a C (75 points) if the researcher located only two pieces of data; a D (65 points) if the researcher only found one piece of data; and an F (59 points) if none of the data were found.
For the fourth indicator, simplicity of state funding mechanism, we examined the budgets of all 50 states and the District of Columbia and then graded each state on the number of line item expenditures. To collect the information, we began by visiting each state’s Web site to obtain the fiscal year 2010 budget. In the majority of cases, we obtained the budget from the governor’s office. If the state did not have an executive budget office, we used the budget provided by the state legislature. If possible, we used the ratified fiscal year 2010 budget. However, if the state had not yet ratified its 2010 budget, we used actual fiscal year 2009 expenditures as they appeared in the proposed budget for fiscal year 2010. If that data were not available, we used the state’s fiscal year 2009 budget.
We conducted the study from June 16 to June 23, 2009, and limited our search to the education section of the state budget. We did not include expenditures that had been earmarked for the Department of Education, base school funding, or base teacher salaries. We also did not count line items that received no funds. We then graded the states on a broad curve, so that if a state had 10 or fewer line items, we gave it an A (100 points). If the state had between 11 and 20 line items, it received a B (85 points). If the state had between 21 and 40 line items, it received a C (75 points). If the state had between 41 and 75 line items, we gave it a D (65 points). And if a state had more than 75 line items, it received an F (59 points).

For the fifth indicator regarding school budgets, we drew on the 2007–2008 SASS. The data came from the principal questionnaire, which asked, “How much actual influence do you think each group or person has on decisions concerning the following activities … deciding how your school budget will be spent?” Principals ranked their influence on a scale: minor influence, moderate influence, major influence, and no influence. We reported the percentage of principals who indicated a major amount of influence.
In the finance category, we also listed states that received gold stars for having a student-based funding policy. We relied on unpublished data provided to us in June 2009 by Michael Griffin of the Education Commission of the States. The column is shaded gray to indicate that it was not included in the calculation of the state’s grade.
SOURCES
Districts have full authority over teacher pay: State Teacher Policy Yearbook: What States Can Do to Retain Effective New Teachers, National Council on Teacher Quality, 2008.
Teacher performance pay programs: Ibid.
Online accessibility of state finance data: Authors, June 2009.
Simplicity of state funding mechanism: Authors, June 2009.
Percentage of principals who report a major amount of influence over the school budget: Schools and Staffing Survey, National Center for Education Statistics, U.S. Department of Education, Principal Data File, 2007-08.
State has a student-based funding policy: Michael Griffin, Education Commission of the States, State Education Funding Formulas and Grade Weighting, 2005, unpublished tabulations.
Staffing: Hiring & Evaluation
To calculate a letter grade for this category, we averaged together eight indicators, with each indicator accounting for one-eighth of the total grade. We then assigned letter grades based on the resulting numerical value, using the following scale: 90 to 100 = A, 80 to 89 = B, 70 to 79 = C, 60 to 69 = D, and below 60 = F. If the state’s numerical value was less than .50 from the next highest grade category, we rounded up and gave that state the higher grade. For example, if a state earned a numerical value of 89.8, it earned an A and not a B. We did not use a curve. In the print edition of the report, states are ranked from highest to lowest according to their relative performance. If states had the same numerical score, they were listed alphabetically.
For the first and second indicators regarding basic skills and subject-knowledge teacher tests, we relied on data from Editorial Projects in Education. If a state had such an exam, we awarded it 100 points. If the state did not have an exam, it received 59 points. For the third indicator, strength of teacher evaluations, we relied on data from the National Council on Teacher Quality (NCTQ), which evaluated state teacher evaluation policies against a number of criteria, including whether instructional effectiveness was the preponderant criterion and whether the evaluations included objective measures of student learning. NCTQ measured the state’s performance against specific policy goals, and we converted that information into letter grades: if a state fully met NCTQ’s goal, it received an A (100 points); if a state nearly met NCTQ’s goal, it received a B (85 points); and so forth.
The fourth indicator, strength of alternative certification, also relied on data from NCTQ, which analyzed state alternative certification programs against a number of criteria, including admissions standards, support for new teachers, and the degree to which states held the programs accountable. NCTQ provided each state with a letter grade, and to calculate our grades, we converted that information into numerical values (A = 100 points, B = 85 points, C = 75 points, D = 65 points, and F = 59 points).
For the fifth indicator, percentage of alternatively certified teachers, we drew on the 2007–2008 Schools and Staffing Survey. The data came from the teacher questionnaire, which asked teachers if they entered the profession through an alternative certification program, which was defined as “a program that was designed to expedite the transition of nonteachers to a teaching career, for example, a state, district, or university alternative certification program.” We then converted the data into numerical values based on a broad curve: the top 10 states received 100 points, the next 10 states received 85 points, and so forth.
We created the sixth indicator, national programs to recruit nontraditional teachers, by compiling an index of three variables: (1) states with the largest relative number of Troops to Teachers (TTT) hires, (2) states in which The New Teacher Project (TNTP) is present, and (3) states in which Teach for America (TFA) is present. We chose these organizations because they have demonstrated a strong record, validated by independent research, of bringing nontraditional applicants into the classroom, and we gave credit to states that have partnered with these organizations or created the conditions that have allowed these programs to flourish in their state.
To calculate the states with the largest relative number of Troops to Teachers hires, we divided the number of teachers hired through the program from 1992 to June 2009 by the total number of teachers in the state for the 2006–2007 school year. Troops to Teachers provided us with the data in 2009. We obtained the number of teachers in the state from the National Center for Education Statistics’ State Nonfiscal Survey of Public Elementary/Secondary Education.
We then ranked the states on a broad curve: the top 10 states received 100 points, the next 10 states received 85 points, and so forth. We relied on data from TNTP and TFA to give credit (100 points) to states in which those organizations operated and 59 points if they did not. We then assigned letter grades based on the resulting numerical value, using the following scale: 90 to 100 = A, 80 to 89 = B, 70 to 79 = C, 60 to 69 = D, and below 60 = F. If the state’s numerical value was less than .50 from the next highest grade category, we rounded up and gave that state the higher grade. For example, if a state earned a numerical value of 89.8, it earned an A and not a B.
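Assuming the three component scores are averaged with equal weight (an assumption on our part; the report does not state the weights explicitly), the index might be computed as in this Python sketch:

```python
# Hypothetical sketch of the nontraditional-recruitment index.
# ttt_points: the curved score (100, 85, ...) for Troops to Teachers
# hires; presence of TNTP and TFA earns 100 points each, absence 59.
# Equal weighting of the three parts is our assumption.
def recruiting_index(ttt_points, has_tntp, has_tfa):
    tntp_points = 100 if has_tntp else 59
    tfa_points = 100 if has_tfa else 59
    return (ttt_points + tntp_points + tfa_points) / 3
```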
For the seventh indicator regarding principals’ influence over teacher hiring, we relied on the 2007–2008 Schools and Staffing Survey. The data came from the principal questionnaire, which asked, “How much actual influence do you think each group or person has on decisions concerning the following activities … hiring new full-time teachers?” Principals then ranked their influence along a scale: minor influence, moderate influence, major influence, and no influence. We reported the percentage of principals who indicated a major amount of influence.
The eighth indicator looked at interstate portability requirements and relied on data gathered by the National Council on Teacher Quality (NCTQ). As part of its evaluation of interstate portability requirements, NCTQ judged the states against specific policy goals, such as the degree to which the state required transcript analyses and testing of out-of-state teachers. We converted that data into letter grades: if a state fully met NCTQ’s goal, it received an A (100 points); if a state nearly met NCTQ’s goal, it received a B (85 points); and so forth.
In the final indicator on nontraditional administrators, we gave gold stars to states in which New Leaders for New Schools (NLNS) had been granted accreditation to prepare administrative candidates for licensure. While NLNS works with a variety of states and districts to help them recruit, train, and support outstanding school leaders, we gave credit only to states in which NLNS has become an approved program to propose candidates for state certification. This column is shaded gray to indicate that it was not included in the calculation of the final grades.
SOURCES
State requires teachers to pass basic skills test: Education Counts, Editorial Projects in Education, 2008, http://www.edcounts.org/.
State requires teachers to pass subject-knowledge tests: Ibid.
Strength of teacher evaluations: State Teacher Policy Yearbook: What States Can Do to Retain Effective New Teachers, National Council on Teacher Quality, 2008.
Strength of alternative certification: State Teacher Policy Yearbook: Progress on Teacher Quality, National Council on Teacher Quality, 2007.
Percentage of alternatively certified teachers: Schools and Staffing Survey, National Center for Education Statistics, U.S. Department of Education, Teacher Data File, 2007-08.
National programs to recruit nontraditional teachers: Original index compiled from data reported from: Troops to Teachers, 2009; State Nonfiscal Survey of Public Elementary/Secondary Education, National Center for Education Statistics, U.S. Department of Education, 2007; The New Teacher Project, June 2009; Teach for America, June 2009.
Percentage of principals who report a major influence over teacher hiring: Schools and Staffing Survey, National Center for Education Statistics, U.S. Department of Education, Principal Data File, 2007-08.
Strength of state’s interstate portability requirements: State Teacher Policy Yearbook: Progress on Teacher Quality, National Council on Teacher Quality, 2007.
National programs to recruit nontraditional administrators: New Leaders for New Schools, 2009.
Staffing: Removing Ineffective Teachers
In this category, we sought to measure the barriers to the removal of ineffective teachers. To do this, we analyzed the 2007–2008 Schools and Staffing Survey. The data came from the principal survey, which asked principals if they believe the following issues are a barrier to the “dismissal of poor-performing or incompetent teachers.” The percentage of principals who answered no to each question was reported at the state level for the following issues:
- Personnel policies
- Termination decisions not upheld
- Length of time required for termination process
- Effort required for documentation
- Tight deadlines for completing documentation
- Tenure
- Teacher associations or unions
- Dismissal is too stressful and/or uncomfortable for you
- Difficulty in obtaining suitable replacements
- Resistance from parents
To calculate grades, we averaged all the indicators together and then graded the states using a quintile curve: The top 10 states received As, the next 10 received Bs, the next 11 received Cs, the next 10 Ds, and the bottom 10 states Fs. In the print edition of the report, states are ranked from highest to lowest according to their relative performance. If states had the same numerical score, they were listed alphabetically.
SOURCES
Personnel policies: Schools and Staffing Survey, National Center for Education Statistics, U.S. Department of Education, Principal Data File, 2007-08.
Termination decisions not upheld: Ibid.
Length of time required for termination process: Ibid.
Effort required for documentation: Ibid.
Tight deadlines for completing documentation: Ibid.
Tenure: Ibid.
Teacher associations or unions: Ibid.
Dismissal is too stressful and/or uncomfortable for you: Ibid.
Difficulty in obtaining suitable replacements: Ibid.
Resistance from parents: Ibid.
Data
To calculate a grade for this category, we averaged together six indicators, with each indicator accounting for one-sixth of the total grade. We then assigned letter grades based on the resulting numerical value, using the following scale: 90 to 100 = A, 80 to 89 = B, 70 to 79 = C, 60 to 69 = D, and below 60 = F. If the state’s numerical value was less than .50 from the next highest grade category, we rounded up and gave that state the higher grade. For example, if a state earned a numerical value of 89.8, it earned an A and not a B. We did not use a curve. In the print edition of the report, states are ranked from highest to lowest according to their relative performance. If states had the same numerical score, they were listed alphabetically.
For the first and second indicators regarding students’ test records and teacher-identifier systems, we relied on information from the Data Quality Campaign (DQC). DQC, a national effort to encourage states to implement longitudinal data systems to improve student achievement, published the data in 2008. For the first indicator, we gave the state 100 points if it maintained student-level data that can be used to determine student progress over time. For the second variable, we awarded a state 100 points if it connected teacher performance and demographic data to student data. If a state did not have such policies, we awarded it 59 points.
On the third indicator regarding access to interactive school-level databases for analysis, we awarded a state 100 points if it had a program and 59 points if it did not. Editorial Projects in Education gathered the data and gave credit to states that had an accessible, interactive database that educators could use to sort data by demographic characteristics and create charts and graphs.
For the fourth indicator, state has a P-20 longitudinal data system, we relied on information gathered by Achieve, Inc., a Washington-based education and research organization. The group gave credit to states that have an operational data system that annually matches student data from the K–12 and postsecondary levels. We awarded a state 100 points if it had such a system and 59 points if it did not.
The fifth and sixth indicators look at college remediation rate data, and we relied on data from Achieve. To receive credit from Achieve for reporting remediation data, a state had to calculate and publish the percentage of high school graduates who are required to take a remedial reading, writing, or math course when they enter college. For a state to receive credit for factoring college remediation data into its accountability system, it must utilize the data in its high school accountability formula. We awarded a state 100 points if it had such policies and 59 points if it did not.
SOURCES
State has the ability to match individual students’ test records from year to year to measure academic growth: Data Quality Campaign, 2008.
State has a teacher-identifier system with the ability to match teachers to students: Ibid.
State provides educators with access to interactive school-level databases for analysis: Education Counts, Editorial Projects in Education, 2006, http://www.edcounts.org/.
State has a P-20 longitudinal data system: Closing the Expectations Gap 2009, Achieve, Inc., February 2009.
State publicly reports college remediation data: Ibid.
State factors college remediation data into accountability: Ibid.
Pipeline to Postsecondary
To calculate a letter grade for this category, we relied on six indicators, with each indicator accounting for one-sixth of the total grade. We then assigned letter grades based on the resulting numerical value, using the following scale: 90 to 100 = A, 80 to 89 = B, 70 to 79 = C, 60 to 69 = D, and below 60 = F. If the state’s numerical value was less than .50 from the next highest grade category, we rounded up and gave that state the higher grade. For example, if a state earned a numerical value of 89.8, it earned an A and not a B. We did not use a curve. In the print edition of the report, states are ranked from highest to lowest according to their relative performance. If states had the same numerical score, they were listed alphabetically.
The first indicator on career- and college-ready diplomas examines the degree to which states have aligned their course requirements with college and workplace expectations. We relied on data collected by Achieve, Inc., which looks at whether students in each state needed to complete a college- and career-ready curriculum in order to graduate. Achieve gave credit to states that have raised their course requirements using one of two approaches. Some states have required students to automatically enroll in a college-ready curriculum but allowed them to opt out if their parents sign a waiver. Others have set mandatory course requirements without any opt-out provisions. Since both approaches aim to expand access to rigorous academics, we gave credit to states that had taken either strategy, and we gave a state 100 points if it had a policy in place. If a state had yet to implement the requirements, we awarded it 59 points. In its report Closing the Expectations Gap, Achieve gave credit to Louisiana for requiring a college- and career-ready diploma. But in 2009, the Louisiana legislature rolled back its policy. Achieve made us aware of the change, and since Achieve no longer credits Louisiana for having such a policy, neither did we.
For the second indicator, high school exams gauge college and career readiness, we relied on data from Achieve. The organization gave credit to states such as New York that have developed their own college readiness exams as well as states such as Maine that have incorporated national college admissions exams like the SAT into their assessment systems. We awarded a state 100 points if it mandated such an exam and 59 points if it did not.
We relied on data calculated by the College Board, the New York-based research organization, for the third indicator, students passing an Advanced Placement test. The organization describes its methodology for calculating the indicator as follows: “The numerator includes each public school student in the graduating class of 2008 who earned an AP Exam score of 3 or higher on an AP Exam at any point in his or her high school years; if a student earned more than one AP Exam grade of 3 or higher, she or he was still only counted once. The denominator is simply the overall number of public school students graduating from high school in 2008, as projected in “Knocking at the College Door” (2008), Western Interstate Commission for Higher Education.” We then ranked the results on a quintile curve: The top 10 states received 100 points; the next 10 received 85 points; the next 11 received 75 points; the next 10 received 65 points; and the bottom 10 received 59 points.
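The rate described above can be sketched in Python; the record format (one `(student_id, score)` pair per exam taken) and the function name are our own illustration:

```python
# Each public school graduate who earned a 3 or higher on any AP exam
# counts once in the numerator, no matter how many qualifying scores he
# or she earned; the denominator is the projected number of public
# school graduates. The input format is hypothetical.
def ap_pass_rate(exam_records, projected_graduates):
    passers = {student_id for student_id, score in exam_records if score >= 3}
    return len(passers) / projected_graduates
```

For example, a student with scores of 3 and 5 on two different exams counts only once among the passers.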
For the fourth and fifth indicators that examined dual enrollment programs and work-based internships, we relied on the 2007–2008 Schools and Staffing Survey (SASS). The information comes from the school questionnaire, which asked if schools have “dual or concurrent enrollment that offers both high school and college credit funded by the school or the district” and “work-based learning or internships outside of school, in which students earn course credits for supervised learning activities that occur in paid or unpaid workplace assignments.” When the National Center for Education Statistics reports SASS data online, it requires a minimum sample size of 30. We used the same reporting guidelines for our analysis, and thus, we did not report results for the District of Columbia because its data did not meet those minimum sample size requirements.
For the final indicator regarding a standard high school diploma with a career specialization, we relied on data from Editorial Projects in Education. The organization gave credit to a state if it allowed students to earn a high school diploma with an endorsement or certification if the student completed an additional sequence of career or technical course work. We gave a state 100 points if it had such a policy and 59 points if it did not.
SOURCES
State has aligned graduation requirements with college and workplace expectations: Closing the Expectations Gap 2009, Achieve, Inc., February 2009.
State has high school exams that gauge college and career readiness: Ibid.
Percentage of students in the high school class of 2008 passing an AP test: The 5th Annual AP Report to the Nation, The College Board, February 2009.
Percentage of schools reporting dual-enrollment programs: Schools and Staffing Survey, National Center for Education Statistics, U.S. Department of Education, School Data File, 2007-08.
Percentage of schools reporting work-based internships: Ibid.
State offers a standard high school diploma with a career specialization: Education Counts, Editorial Projects in Education, 2009, http://www.edcounts.org/.
Technology
Download this data table (xls) | Return to the table of contents
We used four indicators to calculate a letter grade for this category, with each indicator accounting for one-fourth of the total grade. We then assigned letter grades on the resulting numerical value using the following scale: 90 to 100 = A, 80 to 89 = B, 70 to 79 = C, 60 to 69 = D, and below 60 = F. If the state’s numerical value was less than .50 from the next highest grade category, we rounded up and gave that state the higher grade. For example, if a state earned a numerical value of 89.8, it earned an A and not a B. We did not use a curve. In the print edition of the report, states are ranked from highest to lowest according to their relative performance. We relied on Editorial Projects in Education (EPE) for all of the data in this category.
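The conversion from a category's numerical value to a letter grade, including the round-up rule, can be sketched as below. This is an illustrative helper of our own naming, not code from the report:

```python
def letter_grade(value):
    """Convert a category's numerical value to a letter grade.
    A value less than .50 below the next cutoff is rounded up,
    so 89.8 earns an A rather than a B."""
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if value >= cutoff or cutoff - value < 0.5:
            return grade
    return "F"
```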
The first indicator, students per a high-speed Internet-connected computer, is the ratio of the number of students in the state divided by the number of instructional computers that are available for instruction and connected to the Internet by a T1, T3, or cable modem. We awarded 100 points if a state had 3 or fewer students per a high-speed Internet-connected computer; 85 points if the ratio was between 3.1 and 3.3; 75 points if it was between 3.4 and 3.7; 65 points if it was between 3.8 and 4.3; and 59 points if there were 4.4 or more students per computer. The research firm Market Data Retrieval gathered the data and provided it to EPE.
For the second indicator, we awarded a state 100 points if it established a virtual school, which EPE defined as creating or financing an education institution where instruction is delivered over the Internet. States that do not have a statewide virtual school received 59 points. The third indicator examined whether a state offered a computer-based assessment to all students in the grade and subject in which the test is offered. EPE collected this data, and we chose to give credit to states only if the assessment was open to all students. We awarded a state 100 points if it had such a program and 59 points if it did not. On the final indicator, we gave 100 points to a state if it required teachers to demonstrate technology competence through a formal assessment in order to receive an initial teaching license and 59 points if it did not. To receive credit from EPE, the exam must be a stand-alone assessment—it cannot be a part of the teacher preparation process.
SOURCES
Students per a high-speed Internet-connected computer: Education Counts, Editorial Projects in Education, 2006, http://www.edcounts.org/.
State has established a virtual school: Education Counts, Editorial Projects in Education, 2009, http://www.edcounts.org/.
State offers computer-based assessment: Ibid.
State requires technology testing for teachers: Ibid.
Download this data table (xls) | Return to the table of contents
We did not grade the states in this category. We listed the indicators for informational purposes only. For this category, the U.S. Chamber also conducted an online survey of chambers of commerce and other state and local business leaders. The Northwest Regional Educational Laboratory, an education research organization based in Portland, Oregon, conducted the survey on behalf of the Chamber from May 18 to June 6, 2009. More than 550 chambers and state and local business leaders were invited to participate in the survey. The response rate was 40%. The Chamber excluded responses from those who did not spend at least five hours each month working on education issues.
While the survey results were not used to grade the states, information from the survey was included in the narrative text for the findings and the state reform category. Specifically, we reported the results from a question that asked business leaders about how much overall support there was in their state from elected officials for charter schools and bonuses for effective teachers. The respondents could choose from the following answers: no support, very little support, some support, and a lot of support.
The first indicator looked at whether a state has joined the Common Core State Standards Initiative (CCSSI), led by a diverse coalition of groups that includes the National Governors Association (NGA), the Council of Chief State School Officers, ACT, the College Board, and Achieve, Inc. The group is developing a set of common academic standards in English language arts and mathematics that will be shared by all 50 states. As of September 2009, Alaska and Texas had not joined the CCSSI. We relied on data from Achieve for the second indicator, state factors reliable graduation rate into its accountability system. For a state to receive credit from Achieve, it must use a four-year cohort graduation rate consistent with the NGA’s Graduation Rate Compact and recently adopted federal regulations.
For the third indicator on international assessments, we relied on data from the National Center for Education Statistics’ Highlights From TIMSS 2007: Mathematics and Science Achievement of U.S. Fourth- and Eighth-Grade Students in an International Context. In order to benchmark the performance of their students against those from other countries, two states—Massachusetts and Minnesota—participated in the Trends in International Mathematics and Science Study 2007.
For the final indicator, we relied on data supplied by the Policy Innovators in Education Network, or PIE Network, whose members share a common commitment to advancing equity, high learning standards, effective teaching, accountability, and public school choice.
SOURCES
State supports common standards: National Governors Association, September 2009.
State factors reliable graduation rate into accountability: Closing the Expectations Gap 2009, Achieve, February 2009.
State has participated in international assessments: Gonzales, P., Williams, T., Jocelyn, L., Roey, S., Kastberg, D., & Brenwald, S. (2008). Highlights From TIMSS 2007: Mathematics and Science Achievement of U.S. Fourth- and Eighth-Grade Students in an International Context, National Center for Education Statistics, U.S. Department of Education.
Presence of Policy Innovators in Education Network: Policy Innovators in Education Network, July 2009.