Section 43201(c), the “Artificial Intelligence and Information Technology Modernization Initiative: Moratorium,” states:
no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.
The purpose of this provision is clear. It aims to nullify existing and future state efforts to address the already-proliferating harms of AI or to place any restrictions on AI deployment. Indeed, the proposed text includes further definitions and rules of construction, the latter of which states, “the primary purpose and effect of [the moratorium] is to remove legal impediments to, or facilitate the deployment or operation of, an artificial intelligence model, artificial intelligence system, or automated decision system.”
The few significant existing state AI laws are focused on preventing harms by promoting transparency, algorithmic fairness, and accountability. There is already ample evidence of the harms from existing AI systems, from the automated denial of health insurance claims to AI monitoring of employees, and states are considering regulating on a variety of issues. This moratorium would prevent states from banning even the most harmful uses of AI, such as a bill prohibiting the automated firing of employees by AI systems. These are real-world harms that may destroy public trust in AI systems and slow AI adoption, absent laws that can reassure the public of their safety.
The proliferation of state AI laws is entirely due to congressional inaction. Traditionally, state legislation filling the void left by the federal government has been a celebrated feature of federalism. The states have been laboratories of democracy, something celebrated by conservatives and progressives alike. Different state efforts are the best opportunity to discover the most effective AI regulations. Yet the sweeping federal moratorium on state AI laws would be premature, as few laws are yet in effect, and the thousands of bills that have been proposed are far from guaranteed to pass. Moreover, the moratorium is not paired with any baseline federal AI legislation; the House is proposing to erase state protections without offering a federal replacement. The moratorium also ignores the history of early internet legislation, when Congress often moved only once there was concrete evidence of emerging conflicts that needed to be resolved.
The preemption of state laws regulating AI is a top goal of Big Tech and AI companies, and this moratorium proposal offers an unprecedented giveaway to industry at a time when the president and the majority in the House of Representatives have spent years claiming that these companies are too powerful and must be held accountable. To essentially prevent all 50 states from exploring AI policy solutions at a time when Congress has not passed a significant technology regulation bill in many years is to avoid the problem and allow it to spin out of control.
Far from being a dramatic congressional action, a 10-year moratorium on state AI laws would represent a great congressional inaction. It would prevent any policy development at the state level that could be adopted nationally, and it would give Congress another excuse to kick the can down the road until it is too late to pass comprehensive and necessary laws.
Congressional inaction has incentivized state action on AI
The rise of generative AI into the public consciousness pushed Congress to focus on it. Yet despite numerous bipartisan AI working groups in both chambers of the 118th Congress issuing reports on the importance of addressing AI, there have been no meaningful legislative steps. Although Congress has introduced numerous AI bills and held hearings, the 118th Congress passed no AI bills, and the 119th Congress has so far passed only one AI-related bill, the TAKE IT DOWN Act. This is part of a longer history of congressional inaction on technology issues, which has led states to take their own actions, such as the California Consumer Privacy Act and the Illinois Biometric Information Privacy Act in the privacy space. The same can be said of the states stepping in to regulate AI.
States as laboratories of democracy
States are the laboratories of democracy, and policy innovation comes from experimentation. For example, many AI regulation opponents have called to establish regulatory sandboxes in states that would allow experimentation and innovation in AI governance. The Institute for Progress (IFP) AI Action Plan Database, for example, categorized 30 submissions that included a recommendation to “[e]stablish regulatory sandboxes for testing AI innovations with temporary regulatory relief.”
In the absence of federal legislation, states are best positioned to listen to their residents and determine appropriate AI policy solutions. Unlike Congress, which is often stalled by partisan gridlock and special interest lobbying, state governments can be nimbler and more responsive to emerging technological threats. Although some state regulations may end up being ineffective or burdensome, others may prove effective and serve as models for future federal legislation. Without state regulations, Congress will have no real-world examples to draw from when crafting national AI regulation.
Concerns about a patchwork of state regulations tend not to acknowledge the reality that most interstate commerce already deals with varying state laws. And while the tech industry has claimed that a patchwork of state privacy legislation would be overly burdensome, it has also supported state privacy bills.
A federal moratorium is premature
The argument has been made that, because thousands of AI bills are pending in state legislatures, federal preemption is necessary. But anyone who works on state policy knows that thousands of bills are proposed in state legislatures every session, and most go nowhere. Big Tech and AI companies are treating every bill proposed in a state legislature as if it will pass, which is not a serious metric. Rather than judging the potential burden of proposed legislation, it would be more reasonable to consider the state AI laws on the books today.
Few state AI bills have passed into law, fewer still have gone into effect, and fewer yet could be credibly argued to impose significant burdens on AI developers or deployers. A quick glance at the National Conference of State Legislatures’ (NCSL) trackers for artificial intelligence legislation in 2025 and 2024 finds that most enacted or adopted AI legislation is relatively minor or the kind of legislation that AI companies would support, such as driving AI adoption or increasing AI education or workforce support. Moreover, the International Association of Privacy Professionals’ “US State AI Governance Legislation Tracker”—which tracks more substantial “cross-sectoral AI governance bills that apply to private sector organizations”—lists only five bills that have passed into law. Of those five, only one, Colorado’s S.B. 205, has drawn fierce criticism from industry and AI adoption proponents, and that bill does not even fully go into effect until February 2026.
By and large, Big Tech and AI companies complain about hypothetical future harms, and they have not demonstrated any significant regulatory burdens or conflicting court decisions that justify this moratorium. Meanwhile, today’s AI and automated decision-making systems are causing real harms—and states have taken these harms more seriously than Congress. Congress has not even examined the potential impacts of a moratorium. Before its vote approving the moratorium, the House E&C Committee held no hearings to examine this stripping of state power and authority—either the moratorium itself or the state laws it would invalidate. It has not invited as witnesses state elected officials, such as the state legislators who authored the bills, or the state attorneys general and governors who would be tasked with enforcement. The moratorium is opposed by the National Conference of State Legislatures and the National Association of State Chief Information Officers.
The E&C Committee is clearly aware this issue deserves deeper examination: The same day that it passed the state AI moratorium, it announced a hearing for the following week, titled “AI Regulation and the Future of US Leadership,” that will focus on how “[b]urdensome and conflicting AI legislation stifles innovation and undermines the success of entrepreneurs.” Generally, hearings to examine the impact of potential legislation are most useful to legislators before any votes are held on that legislation. It should also be noted that each of the state legislatures that passed AI bills did so through its regular legislative process, with hearings held before the votes, witnesses, amendments, debates, and multiple votes.
A moratorium on state AI laws, without any federal AI proposal
The House E&C’s proposed moratorium on state AI laws is not federal preemption in the traditional sense, as it does not offer alternative federal legislation to either increase AI adoption or combat AI harms. It is a massive usurping of state power without any baseline federal legislation to fill the vacuum.
Federal preemption can be an appropriate tool at times but is not a tool to be used lightly, without serious examination of the consequences. The House E&C Committee is well-aware of the complex considerations around preemption. In February 2025, the House E&C Committee Data Privacy Working Group, which is composed only of members of the majority who also crafted the bill that includes the moratorium, posted a Request for Information (RFI) with questions such as, “Given the proliferation of state requirements, what is the appropriate degree of preemption that a federal comprehensive data privacy and security law should adopt?” The committee has yet to release its review of submissions to the RFI.
The House E&C Committee, under previous leadership, held numerous privacy hearings during the past two Congresses and drafted two different versions of bipartisan bicameral federal data privacy legislation that would have preempted state privacy laws, with some exceptions, in favor of a federal standard inclusive of data minimization and enforcement options. These legislative efforts aimed to at least balance the trade-offs between innovation and consumer protections, standing in stark contrast to the current giveaway to Big Tech and AI companies.
It has been argued that the light-touch approach Congress and the Clinton administration took to developing the internet in the 1990s justifies doubling down on AI deregulation through this state law preemption—or, in the case of this moratorium, no regulation at all. But this ignores the reality that while Congress may have preempted state laws in the past, it generally did so with federal laws that had specific goals and addressed real conflicts requiring congressional action. For example, Section 230, which provides immunity from civil and state criminal liability for carrying or moderating third-party content, came after a series of conflicting court decisions that left websites in legal uncertainty when hosting and moderating such content. Section 230 provided federal clarity on the matter of intermediary liability that allowed for the explosion of internet companies and is considered the “Twenty-Six Words That Created the Internet.” Yet some argue that Section 230’s broad approach created both the modern internet and a culture of immunity that has incentivized some of modern technology companies’ worst abuses—so actions taken in the 1990s should serve as a cautionary tale. Such lessons argue for far more examination and analysis of the preemption of state AI laws before any congressional action.
A giveaway for Big Tech and AI companies
The most obvious motivation for the moratorium on state AI laws is that it is a top priority for Big Tech and AI companies. According to the IFP AI Action Plan Database, which analyzed submissions to the Trump administration’s “AI Action Plan” RFI, 41 submissions included a recommendation IFP categorized as “[i]mplement federal preemption of state AI laws to create a unified national framework.”
Specifically, Big Tech and AI companies including Google, Meta, and OpenAI have called for the federal preemption of existing and future state AI laws. In addition, industry-funded groups such as the U.S. Chamber of Commerce, the Computer & Communications Industry Association, the Information Technology Industry Council, and TechNet have called for the federal preemption of state AI laws. (CAP has previously outlined the funding relationships between these organizations and Big Tech companies.) Those arguing that the moratorium is not a giveaway to Big Tech have not elaborated on how that could be true when Big Tech companies have specifically asked for the preemption of state AI laws in their requests to the Trump administration.
As CAP has written previously, President Trump and House E&C Committee leaders have declared Big Tech accountability a top priority. Therefore, it does not make sense that they would offer these companies such an unprecedented giveaway. The committee is likely aware of the poor optics of this moratorium, which is why it passed it in the dead of night, hidden inside a bill that strips health care from millions of Americans to pay for tax breaks for the wealthy.
Conclusion
AI development is moving at light-speed, and 10 years is a lifetime in the world of technology. It is hard to imagine what it will look like in a decade, for both good and ill. Preventing America’s 50 states from regulating AI, while failing to provide any federal AI legislation, is a dereliction of duty by the House E&C Committee. Americans want Congress to act on emerging problems, and when it does not, they expect the states to act. Congressional inaction cannot also punish states for action.
The author would like to thank Nicole Alvarez and Meghan Miller for their assistance in helping quickly publish this product.