Five Problems with Dynamic Scoring
It’s a Bad Method for Estimating the Cost of Proposed Legislation
Dynamic scoring—an attempt to measure the macroeconomic effects of policy changes before they happen—continues to pop up everywhere, even in negotiations by the erstwhile Joint Select Committee on Deficit Reduction, better known as the super committee. Long a favorite tool of antitax zealots, dynamic scoring poses a number of problems that make it a poor tool for estimating the cost of proposed legislation, and the agencies tasked with making these estimates have rightly rejected it for years.
Advocates of this method typically confine it to revenue estimates, though it could be applied to spending as well. Fans of dynamic scoring argue that tax cuts pay for themselves by spurring so much economic growth that revenues actually increase on net. The Bush administration, in particular, lobbied for the use of dynamic scoring to estimate the cost of its tax cuts on exactly that theory. Of course the Bush tax cuts did no such thing, instead causing our national debt to explode.
Dynamic scoring was a bad idea then and it is still a bad idea today. Here are five reasons why we shouldn’t use dynamic scoring.
Conventional revenue estimates already include behavioral responses
While some proponents of dynamic scoring explain it as an alternative to “static” standard scoring estimates, the conventional cost estimates prepared by the Congressional Budget Office, or CBO, and the Joint Committee on Taxation, or JCT, are not actually static. In estimating the budgetary effects of proposed legislation, CBO and JCT both incorporate the microeconomic behavioral effects of policy changes into their estimates. For example, when they score a gas-tax increase, they account for the reduction in gas purchases that would result.
What they don’t do is attempt to measure the macroeconomic effects—the effects a policy will have on the overall growth of the economy. As JCT explains, “estimates always take into account many likely behavioral responses by taxpayers to proposed changes in tax law … [including] shifts in the timing of transactions and income recognition, shifts between business sectors and entity form, shifts in portfolio holdings, shifts in consumption, and tax planning and avoidance.” The official JCT scores do, however, assume that GDP will not deviate from the projected CBO baseline.
We cannot accurately measure the macroeconomic effects of tax changes
One problem with attempting to measure macroeconomic feedback is that the estimates depend on a large number of assumptions. Broad economywide responses to tax policy changes are complex and often pull in opposite directions, reflecting the wide range of effects a tax change can have on different actors.
As an example, the Center on Budget and Policy Priorities, or CBPP, notes that reducing marginal tax rates can trigger two opposing behavioral responses. Increasing the after-tax compensation a worker receives for an additional hour of work could incentivize the worker to take on additional work because the rewards are greater. At the same time, increasing a worker’s take-home pay for the same hours of work could also incentivize the worker to work fewer hours for the same amount of money. Which of these two effects will be larger, and by how much? The empirical record simply does not offer a clear-cut answer to that question, or to the myriad other questions that dynamic scoring implicitly or explicitly raises. There is no accepted set of rules that can be applied universally to tax-policy changes occurring in a variety of economic environments.
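The tension between those two responses can be made concrete with a stylized numerical sketch. All of the figures below—the wage, the hours, and the tax rates—are hypothetical and are chosen purely for illustration; this is not an estimate from CBPP or any scoring agency.

```python
# Stylized, hypothetical illustration of the two opposing responses to a
# marginal rate cut: the substitution effect (extra work pays more, so work
# more) versus the income effect (the same pay takes fewer hours, so work less).

def take_home(wage, hours, rate):
    """After-tax earnings for a given hourly wage, hours worked, and tax rate."""
    return wage * hours * (1 - rate)

wage, old_rate, new_rate = 25.0, 0.30, 0.25  # all numbers are illustrative

# Substitution effect: an additional hour now yields more after tax,
# which pushes toward working MORE.
extra_hour_before = take_home(wage, 1, old_rate)  # 17.50
extra_hour_after = take_home(wage, 1, new_rate)   # 18.75

# Income effect: the old 40-hour take-home pay can now be earned in fewer
# hours, which pushes toward working LESS.
target = take_home(wage, 40, old_rate)            # 700.00 at the old rate
hours_needed = target / (wage * (1 - new_rate))   # ~37.3 hours at the new rate

print(extra_hour_before, extra_hour_after, round(hours_needed, 1))
```

Which effect dominates is exactly the empirical question the surrounding text says has no clear-cut answer; the sketch only shows that both pulls are real.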
Even if we had clear-cut answers, there are practical limits to the level of sophistication that the estimating agencies could bring to dynamic scoring. Former CBO director Rudolph Penner describes the problem: “Consistent dynamic scoring is logistically impossible given current technology. Scoring is a hectic process. The CBO and JCT produce hundreds of scores each year. Congress always wants scores instantaneously, and analysts often work through the night to keep them happy. Dynamic scoring would force analysts to make many more judgment calls than they do today. Quality control would be difficult, and that implies a high risk that ideological biases will pollute the analysis.”
Estimates require making assumptions about future policies
Will a tax cut be paid for by spending cuts now or by taking on future debt? Macroeconomic responses may differ greatly depending on how policymakers choose to pay for the policy. Requiring budget analysts to guess how the policy will be paid for in order to score it opens up the possibility that their assumptions will influence the projected macroeconomic changes as much as or even more than the policy itself. In testimony before the House Committee on Rules in 2002, CBO director Dan Crippen expressed concern that his office would be stepping into a political minefield by making these guesses: “CBO could make an assumption about what the next five Congresses and at least two presidents will do, but doing so would subject us and the results to a chorus of controversy.”
Even if dynamic scoring worked as advertised, there is evidence the effects are quite small
In 2006 a CBPP analysis of cost estimates for President Bush’s proposal to make the 2001 and 2003 tax cuts permanent found that the dynamic estimates did not differ greatly from conventional estimates. Two dynamic estimates prepared by the CBO differed by less than 4 percent from the conventional estimate. Even the Bush administration’s own estimate found that macroeconomic feedback would offset less than 10 percent of the conventionally estimated cost. There is no evidence that we are missing out on large macroeconomic effects using conventional scoring methods.
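The arithmetic behind such an offset is straightforward. The sketch below uses a hypothetical $100 billion conventional cost (the dollar figure is invented for illustration); the 10 percent offset share is the upper bound from the Bush administration estimate cited above.

```python
# Hypothetical illustration of how little a dynamic adjustment changes the
# bottom line: a tax cut with a $100 billion conventional ("static") cost and
# a macroeconomic feedback offset of 10 percent of that cost.
conventional_cost = 100.0  # billions of dollars (hypothetical figure)
feedback_offset = 0.10     # share of the cost recouped via projected growth

dynamic_cost = conventional_cost * (1 - feedback_offset)
print(dynamic_cost)  # 90.0 -- still the vast bulk of the conventional cost
```

Even at the most generous offset in the estimates above, the dynamic score would differ from the conventional one by only a modest fraction.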
Lawmakers can pass policies regardless of their score
If Congress and the president believe a policy will have positive macroeconomic effects, nothing about conventional scoring prevents them from passing it into law. The Bush tax cuts were enacted despite their score because policymakers believed they would be good for the economy. With conventional scoring, everyone generally knows what’s included in the estimate and can make their own judgments based on that knowledge. Dynamic scoring would only introduce more obscurity to the process.
For these five reasons, CBO and JCT have rightly chosen not to include dynamic scoring in their official cost estimates. Switching to dynamic scoring would greatly reduce transparency in the revenue-estimating process. Macroeconomic forecasting is an imperfect science, and the underlying evidence can be interpreted in many different ways. Dynamic scoring would pressure estimating agencies to make assumptions that are hard to identify, difficult to evaluate, and potentially decisive for the results. CBO and JCT already incorporate behavioral responses into their cost estimates, and attempts to measure the macroeconomic effects of proposed policies would be fraught with inaccuracies and perceived as politically biased.
We may be able to resolve some of these problems in the future, but for now there are many reasons why it doesn’t make sense to use dynamic scoring.
Sarah Ayres is a Research Associate in the Economic Policy department at American Progress.