Introduction and summary
On November 17, 2025, House Republican leaders were reported to be actively considering preemption of state AI laws, with an eye on the National Defense Authorization Act as the possible legislative vehicle.1 President Donald Trump took to Truth Social the next day to amplify the message, claiming that AI investment has made the U.S. economy the “hottest” in the world—despite current economic realities2—and that state overregulation threatens to derail this supposed growth.3 This renewed push comes on the heels of a failed attempt by Sen. Ted Cruz (R-TX)4 to insert a dangerous5 and overreaching6 moratorium on state laws regulating AI into the Senate budget reconciliation package, the Big Beautiful Bill (BBB).7 While the BBB did pass,8 senators voted 99-1 to strike the state AI law moratorium from the bill.9 The moratorium failed due to serious drafting flaws and procedural issues,10 including efforts to contort the proposal to meet Senate reconciliation rules and the use of vague language by Sen. Cruz’s team that expanded the provision’s reach11 beyond what it publicly claimed.12

Despite the removal of the moratorium from the BBB, federal preemption proposals and temporary moratoriums on state AI laws remain a central legislative policy debate. Such actions remain a top priority for the AI industry13 and the Trump administration, which cites fears about “states with burdensome AI regulations” in its AI Action Plan.14 For the largest tech firms, the goal is not simply regulatory clarity but a more permissive legal environment for AI development and deployment.15 Influential AI policy thinkers on the right also continue to refine and promote the idea of a federal moratorium.16 Critically, the proposal for a federal moratorium on state AI laws failed17 not because nearly all U.S. senators oppose a ban on state AI laws but because of how it was crafted to meet the rules of reconciliation. Sen. Cruz has already pledged to reintroduce the proposal elsewhere; he included it in his September 2025 release of “A Legislative Framework for American Leadership in Artificial Intelligence,”18 alongside his proposed SANDBOX Act,19 which would allow the suspension of any federal rules and regulations AI companies identify as inhibiting their ability to develop and deploy AI.
In this report, the author uses the term “reconciliation moratorium” to refer specifically to the House and Senate versions of a moratorium on state AI laws that were introduced as part of the BBB, with the understanding that if a moratorium had passed, it would have blocked most AI laws in any state it applied to.
With a legislative proposal on federal preemption or a moratorium on state AI laws likely to return, it is critical that Congress recognize the public safeguards states provide and the risk that such interventions could entrench industry power by undermining state regulatory authority. Rather than blocking state action, lawmakers should focus on setting a strong federal floor of protections, including prohibitions on the most dangerous uses of AI, while preserving state authority to go further in addressing new harms. At a minimum, any effort to limit state laws should be subject to extended deliberation that fully considers the tradeoffs and ensures that any preemption is paired with meaningful and enforceable federal protections.
Background on the reconciliation AI moratorium fight
The reconciliation AI moratorium was first introduced20 in the House21 as a standalone provision that would apply nationwide and included a narrow exemption for criminal laws. Sen. Ted Cruz later introduced a nearly identical version in the Senate that also applied broadly to all state and local AI laws and omitted the criminal law exemption.22 Although Cruz’s version made a passing reference to Broadband Equity, Access, and Deployment (BEAD) funding, it was still functionally standalone in its scope and effect.23 This “standalone” structure meant that the ban would take effect as binding federal law regardless of whether a state accepted the new BEAD funds. After the Senate parliamentarian raised concerns,24 Cruz revised the language25 to tie enforcement more explicitly to BEAD-related funding. However, as CAP explained at the time,26 the revised text still applied a blanket ban on state and local AI regulation and extended enforcement beyond just the $500 million addition, likely implicating the full $42.5 billion BEAD program. Continued backlash over that ambiguity led to another revision27 and a five-year compromise28 amendment cosponsored by Sen. Marsha Blackburn (R-TN).29 Sen. Blackburn ultimately opposed both her own amendment and the AI moratorium, citing concerns about the provision’s real-world impact: “This provision could allow Big Tech to continue to exploit kids, creators, and conservatives. Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can’t block states from making laws that protect their citizens.”30 Senators then voted 99-1 to strike the AI moratorium31 from the BBB before passing it.32
Federal preemption—including temporary moratoriums—of state AI laws is a nightmare to craft and enforce
Proposals to block state AI laws are likely to return. Some may mirror the reconciliation moratorium by relying on temporary bans or tying restrictions to funding. However, if revived, those efforts would likely fall outside the confines of budget reconciliation legislation and require 60 votes in the Senate to pass, necessitating bipartisan cooperation. Other proposals may take a different route and pursue permanent federal preemption of state AI laws. While both approaches aim to restrict state power, they operate differently in law. Federal preemption displaces state law in favor of federal law under the supremacy clause, while a moratorium only pauses state authority for a limited time without permanently nullifying existing laws or requiring a federal standard in their place.
Federal preemption of state laws is not unprecedented and can be appropriate in some cases, but well-structured examples are narrowly tailored and often paired with strong federal protections. In climate policy, the Clean Air Act provides a model in which federal standards establish a baseline and states retain the authority to adopt stronger protections.33 Several efforts in the broader tech policy space illustrate how preemption can be structured more thoughtfully. In recent years, proposed federal legislation to address data privacy—the American Data Privacy Protection Act34 and the American Privacy Rights Act35—sought to establish comprehensive federal privacy standards by preempting only certain state privacy laws36 while preserving others that reflect consumer protections and civil rights.37 Proponents of preemption of state AI laws frequently point to38 the Internet Tax Freedom Act (ITFA),39 which narrowly and clearly preempted state taxation of internet access. However, the reconciliation moratorium shared little in common with ITFA’s targeted scope. ITFA addressed a well-defined activity, taxation, with explicit guidelines and a sunset provision.
While the reconciliation moratorium was not structured as federal preemption in the traditional sense, its practical effect would have mirrored a broad preemption regime by overriding state authority across a wide range of policy domains. What made this especially dangerous is that, unlike traditional preemption frameworks, the moratorium offered no federal protections in return. It did not establish minimum standards, create oversight mechanisms, or provide any substantive safeguards for the public. Because of this, it is useful to examine the moratorium as a case study of how poorly scoped federal interventions in state authority—whether through federal preemption or temporary moratoriums—can undermine effective governance and generate legal confusion.
The reconciliation moratorium sought to broadly prevent states from regulating entire classes of AI applications across sensitive and diverse sectors such as employment, health care, housing, and elections—many of which are areas in which states have significant equities and long histories of promulgating regulations—for a period of five or 10 years. A decade-long freeze on state AI laws is also excessive by any measure. Ten years ago, there were no transformer models and no large language models, and there was no public understanding of how AI could reshape communication, labor, or information ecosystems. Since then, the field has seen the rapid rise of large-scale language models and the early development of AI agents capable of acting with increasing autonomy across digital environments. Now, leaders in the tech industry are publicly projecting that40 artificial general intelligence could emerge within the very decade during which the reconciliation moratorium would have shut states out of legislating. Even the revised five-year compromise amendment41 introduced by Sens. Cruz and Blackburn, while slightly less extreme, would have still posed serious risks. That is because any attempt to suspend state authority in this fast-evolving space, whether for five years or 10, is dangerous if not paired with a federal floor of basic safeguards.
The reconciliation moratorium attempted to prohibit state and local governments not only from passing new laws or regulations that “limit,” “restrict,” or “regulate” AI but also from enforcing existing ones for the duration of the ban, while allowing those laws and regulations that “facilitate” AI. Although the exact phrasing varied slightly, the terms “regulate” and “facilitate” appeared consistently across the versions introduced in both the House and Senate. Efforts to block only restrictive AI laws while allowing those seen as supportive made the provision especially difficult to interpret, which in turn would have made consistent enforcement nearly impossible. A blanket moratorium that banned all state AI laws, whether they limit or facilitate AI, would have been more straightforward to administer. This highlights the core challenge of overriding state authority in this space: no approach is both simple and neutral when it eliminates states’ ability to respond to rapidly changing risks and applies not only to future legislation but also to existing laws.
The definitions used in the reconciliation moratorium were also deeply flawed. Terms such as “AI system,” “model,” and “automated decision system” were written so broadly42 that they could apply to virtually any computational tool.43 Whether these ambiguities were intentional or accidental remains unclear. What is clear is that such ambiguous language invites lawsuits and regulatory paralysis, particularly because the reconciliation moratorium allowed private entities, not just the federal government,44 to bring enforcement actions. As a result, states would face the risk of being tied up in costly litigation for years, unable to advance legislation without legal challenge.
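To make that definitional overbreadth concrete, consider the minimal sketch below, offered purely as an illustration rather than an analysis of the bill text. It is a two-rule applicant screener with no machine learning at all; under definitions broad enough to cover any computational process that issues a score, classification, or recommendation to influence a decision (a paraphrase of the kind of language critics flagged, not a quotation), even this ordinary arithmetic could arguably qualify as an “automated decision system.” All names, fields, and thresholds are hypothetical.

```python
# Illustration only: a deliberately trivial screening script, written to show how
# broadly drafted "automated decision system" definitions could sweep in ordinary
# software. Every name, field, and weight here is hypothetical, not from the bill.

def score_applicant(years_experience: float, has_license: bool) -> float:
    """Return a simple weighted score from two applicant attributes."""
    return 2.0 * years_experience + (5.0 if has_license else 0.0)

def screen(applicants: list[dict], cutoff: float = 10.0) -> list[dict]:
    """Keep applicants whose score meets the cutoff.

    No machine learning is involved; this is plain arithmetic. Yet it issues a
    simplified output (a score and a pass/fail classification) that influences
    a decision, which is arguably enough to fall within definitions as broad
    as those critics identified in the reconciliation moratorium.
    """
    return [a for a in applicants if score_applicant(a["years"], a["licensed"]) >= cutoff]

if __name__ == "__main__":
    pool = [
        {"name": "A", "years": 6.0, "licensed": False},  # score 12.0 -> kept
        {"name": "B", "years": 2.0, "licensed": True},   # score 9.0 -> filtered out
    ]
    print([a["name"] for a in screen(pool)])  # prints ['A']
```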
Even if a future AI moratorium were rewritten to limit enforcement to the federal government, it would create a different kind of barrier. States would be forced to seek clarity or approval from the government before acting, placing them in a constant position of uncertainty. This would function less like guidance and more like a de facto preclearance system, where state lawmakers hesitate to act out of concern that federal officials could later reinterpret compliance standards or that a change in administration could bring a different view of what is allowed, putting previously lawful state efforts in jeopardy.
Another key concern arises when there is no exemption for criminal laws. The House version45 of the reconciliation moratorium included a carve-out to preserve states’ ability to enforce their criminal codes, but the Senate version removed that protection. Without an exemption for criminal laws, states would be blocked from prosecuting serious AI-related crimes. For instance, Utah recently enacted H.B. 0238,46 a law that expands the definition of child sexual abuse material to explicitly include AI-generated imagery. Under the Senate’s version of the reconciliation moratorium, laws such as H.B. 0238 could have been preempted. This omission could have interfered with efforts to protect children and to address a range of AI-enabled criminal activity, including identity fraud, election interference, and the production of nonconsensual or exploitative synthetic media.
Constitutional concerns
Beyond the practical flaws of the reconciliation moratorium, a ban on state AI laws raises serious constitutional questions. Under the Tenth Amendment, powers not delegated to the federal government are reserved to the states. This means that states possess core authorities such as running elections, regulating state courts, setting the rules for the administration of their own state government, licensing professionals, and protecting public health and safety. The reconciliation moratorium did not carve out exceptions for these areas. Instead, it unconstitutionally imposed a sweeping prohibition that would have invalidated all state AI laws seeking to restrict the technology. For example, it would have prevented state legislatures from setting rules on how their own judiciaries use AI to draft opinions or review filings, an issue squarely within state control.
Many of the first laws governing AI have emerged in precisely these domains. States have already adopted laws specifically targeting AI use to ensure the integrity of their elections.47 For example, Minnesota and Texas ban48 the use of political deepfakes within certain timeframes before an election. In regulating courtroom practices, states have also taken steps to protect their courts from AI-related abuses. Texas’ 30th District Court, for instance, requires legal filings to include an “AI certificate”49 confirming that AI tools were not used to fabricate evidence or citations. The text of the reconciliation moratorium arguably would have preempted those measures in ways that infringe on state sovereignty, removing states’ control over their internal operations and undermining their constitutionally protected role.
The push for a moratorium or preemption is industry-driven
Large AI companies and their trade associations50 have called for federal preemption of state AI laws in their submissions to the White House’s AI Action Plan,51 as documented in the public AI Action Plan database.52 Following these requests, Big Tech companies and their trade groups53 supported and lobbied for the moratorium’s inclusion in the reconciliation bill.54 The reconciliation moratorium reflected this effort, aligning closely with industry demands.
Support for preemption or a moratorium is not limited to large companies. Some smaller AI firms, including signatories to letters submitted to the AI Action Plan,55 have voiced concerns about the cost and complexity of complying with different state laws. While these concerns may not be baseless, it would be misleading to suggest that a moratorium would primarily benefit smaller players. Large technology companies would stand to gain just as much, if not more. These larger players have the resources to manage a patchwork of state laws, but they see compliance with them as a burden that slows development and limits their freedom to deploy systems on their own terms. That is likely why they are demanding preemption: not solely out of principle or concern for startup burdens but because regulatory friction threatens their market dominance. Preemption would reduce the scrutiny they face at the state level and shield their models and practices from accountability. In any debate over federal preemption, then, it is crucial to be clear about who is asking for it, who stands to gain, and what will be lost if states are pushed out of the AI governance space.
State leadership should be protected
The United States still lacks comprehensive federal laws governing AI and privacy. In response, states have stepped forward to protect their residents. Many lawmakers acted swiftly, drawing lessons from Congress’ failure to pass meaningful privacy legislation despite years of debate. They understood that delay at the federal level often means no safeguards at all. Rather than wait, states, which have long been the laboratories of American democracy, advanced laws that promote transparency, prevent abuse, and restrict dangerous uses of AI in sectors such as employment, housing, and health care.
These state efforts are critical because they allow for policy experimentation, developing best practices that can later inform federal legislation. States are uniquely positioned to respond quickly to emerging harms and to tailor policy to local needs. Across policy areas, states have taken a range of approaches that reflect their local priorities and levels of risk tolerance. Some, such as Utah, have opted for light-touch models, including regulatory sandboxes that allow innovation while monitoring outcomes. Others, such as Colorado and California, have passed more comprehensive AI accountability laws that establish transparency and oversight mechanisms.56 In climate policy, California used the authority preserved for the states under the Clean Air Act to adopt stronger protections57 and lead on vehicle emissions policy, including the Advanced Clean Cars and Advanced Clean Trucks programs. Until recently, those policies were backed by a federal waiver that enabled California to impose stricter standards than the national baseline; more than a dozen other states have since adopted them.58 That same balance should guide federal AI policymaking. While the single best way to regulate AI may not yet be known, it is clear that doing nothing is not the answer. Allowing multiple approaches from states to coexist increases the chances of identifying what works, what needs adjustment, and what should be scaled nationally. That flexibility is especially important in a field such as AI, where the technology evolves faster than Congress tends to legislate.
Industry has long warned about the “risk” or “costs” of a fragmented patchwork of state laws. But those same companies have often been the first to oppose serious federal proposals—particularly comprehensive privacy bills59—that would have unified standards and granted them the very preemption they now claim is necessary. The result is a landscape where the patchwork is not a byproduct of overzealous state action but a direct consequence of industry obstruction at the federal level. In that context, state leadership is not a bug but a feature of the system. The United States has long treated states as laboratories of democracy, giving them the freedom to pilot new solutions.
In practice, real legal conflict across states is often what prompts Congress to act. When courts reached conflicting decisions in the 1990s about liability for online platforms, that tension helped prompt Congress to create Section 23060 and resolve the conflict. If state AI laws ever generate similar conflicts that create true uncertainty for national compliance, federal lawmakers will face strong pressure to intervene. That is how the system is designed to function.
Federal AI policy should build on state progress
Congress should not be focused on blocking the limited protections that currently exist at the state level, particularly since few have actually come into effect. Instead, it should focus on establishing foundational federal policies that start with prohibiting the most dangerous abuses of AI and creating guardrails for the responsible use of higher-risk applications. As part of that work, Congress should learn from the states. Some of the most urgent safeguards, including prohibitions on abusive surveillance, algorithmic discrimination, and algorithmic pricing, have already begun to take shape at the state level and are likely to continue expanding as more states act. For example, New Hampshire and Oregon have passed laws banning the use of real-time remote biometric identification for surveillance in public spaces by law enforcement without a warrant,61 and five states have introduced bills to curb surveillance pricing.62 Preserving space for state experimentation and leadership is essential to building a governance model that is both protective and adaptive. Lawmakers can start by calling in officials from states that have already passed AI laws to testify, offering insights into what is working and where challenges remain, something they never did when crafting the reconciliation moratorium. Federal legislation to prohibit specific, high-risk uses of AI may also be warranted to ensure consistent protections across jurisdictions.
A comprehensive federal privacy law is also a necessary starting point, both because privacy is a fundamental human right that deserves robust protection and because privacy safeguards determine what data can be collected and used in the first place.63 AI systems are only as safe and fair as the inputs they are trained on. Without clear rules for how data are collected, shared, and governed, even the most carefully crafted AI policies will fall short.
Conclusion
The reconciliation AI moratorium proposed in the BBB may have ultimately been scuttled, but the broader effort under the Trump administration to assert federal control over AI policy is just beginning—and House Republican leaders have already confirmed they are actively seeking new avenues to impose a moratorium on state AI laws.64 Congress has an important role to play in setting national rules for AI, but that work should not start by pushing states aside. Strong federal protections are necessary and can be put in place without cutting off efforts from the states that are already underway across the country. States are moving forward with real policies that respond to real problems, and blocking those efforts would not create clarity. It would slow progress, protect industry interests, and silence the only voices trying to keep this technology accountable. Poorly crafted federal preemption and blanket moratoriums on state AI laws in the absence of federal standards are dangerous approaches. Congress should reject any push to centralize control at the expense of state power.