Center for American Progress

Moratoriums and Federal Preemption of State Artificial Intelligence Laws Pose Serious Risks
Report

Congressional Republicans are trying again to preempt state AI laws. Here’s what Congress’ last failed moratorium on state AI laws reveals about future efforts at preemption or moratoriums.

A view of the U.S. Capitol on September 23, 2025, Washington, D.C. (Getty/Anna Moneymaker)

Introduction and summary

On November 17, 2025, House Republican leaders were reported to be actively considering preemption of state AI laws, with an eye on the National Defense Authorization Act as the possible legislative vehicle.1 President Donald Trump took to Truth Social the next day to amplify the message, claiming that AI investment has made the U.S. economy the “hottest” in the world—despite current economic realities2—and that state overregulation threatens to derail this supposed growth.3 This renewed push comes on the heels of a failed attempt by Sen. Ted Cruz (R-TX)4 to insert a dangerous5 and overreaching6 moratorium on state laws regulating AI into the Senate budget reconciliation package, the Big Beautiful Bill (BBB).7 While the BBB did pass,8 senators ultimately voted 99-1 to strike the state AI law moratorium from the bill.9 The moratorium failed due to serious drafting flaws and procedural issues,10 including efforts to contort the proposal to meet Senate reconciliation rules and the use of vague language by Sen. Cruz’s team that expanded the provision’s reach11 beyond what it publicly claimed.12

Despite the removal of the moratorium from the BBB, federal preemption proposals and temporary moratoriums on state AI laws remain the subject of a critical legislative policy debate. Such actions remain a top priority for the AI industry13 and the Trump administration, which mentions fears about “states with burdensome AI regulations” in its AI Action Plan.14 For the largest tech firms, the goal is not simply regulatory clarity but also to help shape a more permissive legal environment for AI development and deployment.15 Influential AI policy thinkers on the right also continue to refine and promote the idea of a federal moratorium.16

Critically, the proposal for a federal moratorium on state AI laws failed17 not because nearly all U.S. senators oppose a ban on state AI laws but because of how it was crafted to meet the rules of reconciliation. Sen. Cruz has already pledged to reintroduce the proposal elsewhere, a commitment included in his September 2025 release of “A Legislative Framework for American Leadership in Artificial Intelligence,”18 alongside his proposed SANDBOX Act,19 which would allow the suspension of any federal rules and regulations that AI companies identify as inhibiting their ability to develop and deploy AI.

In this report, the author uses the term “reconciliation moratorium” to refer specifically to the House and Senate versions of a moratorium on state AI laws that were introduced as part of the BBB, with the understanding that if a moratorium had passed, it would have blocked most AI laws in any state it applied to.

With a legislative proposal on federal preemption or a moratorium on state AI laws likely to return, it is critical that Congress recognize the public safeguards states provide and the risk that such interventions could entrench industry power by undermining state regulatory authority. Rather than blocking state action, lawmakers should focus on setting a strong federal floor of protections, including prohibitions on the most dangerous uses of AI, while preserving state authority to go further in addressing new harms. At a minimum, any effort to limit state laws should be subject to extended deliberation that fully considers the tradeoffs and ensures that any preemption is paired with meaningful and enforceable federal protections.

Background on the reconciliation AI moratorium fight

The reconciliation AI moratorium was first introduced20 in the House21 as a standalone provision that would apply nationwide and included a narrow exemption for criminal laws. Sen. Ted Cruz later introduced a nearly identical version in the Senate that also applied broadly to all state and local AI laws and omitted the criminal law exemption.22 Although Cruz’s version made a passing reference to Broadband Equity, Access, and Deployment (BEAD) funding, it was still functionally standalone in its scope and effect.23 This “standalone” structure meant that the ban would take effect as binding federal law regardless of whether a state accepted the new BEAD funds. After the Senate parliamentarian raised concerns,24 Cruz revised the language25 to tie enforcement more explicitly to BEAD-related funding. However, as CAP explained at the time,26 the revised text still applied a blanket ban on state and local AI regulation and extended enforcement beyond just the $500 million addition, likely implicating the full $42.5 billion BEAD program. Continued backlash over that ambiguity led to another revision27 and a five-year compromise28 amendment cosponsored by Sen. Marsha Blackburn (R-TN).29 Sen. Blackburn ultimately opposed her amendment and the AI moratorium due to concerns about the provision’s real-world impact, stating, “This provision could allow Big Tech to continue to exploit kids, creators, and conservatives. Until Congress passes federally preemptive legislation like the Kids Online Safety Act and an online privacy framework, we can’t block states from making laws that protect their citizens.”30 Senators ultimately voted 99-1 to strike the AI moratorium31 from the BBB before passing it.32

Federal preemption—including temporary moratoriums—of state AI laws is a nightmare to craft and enforce

Proposals to block state AI laws are likely to return. Some may mirror the reconciliation moratorium by relying on temporary bans or tying restrictions to funding. However, if revived, those efforts would likely fall outside the confines of budget reconciliation legislation and require 60 votes in the Senate to pass, necessitating bipartisan cooperation. Other proposals may take a different route and pursue permanent federal preemption of state AI laws. While both approaches aim to restrict state power, they operate differently in law. Federal preemption replaces state statutes with federal ones under the supremacy clause, while a moratorium only pauses state authority for a limited time without permanently nullifying existing laws or requiring a federal standard in their place.

Federal preemption of state laws is not unprecedented and can be appropriate in some cases, but well-structured examples are narrowly tailored and often paired with strong federal protections. In climate policy, the Clean Air Act provides a model where federal standards establish a baseline and states retain the authority to adopt stronger protections.33 Several efforts in the broader tech policy space illustrate how preemption can be structured more thoughtfully. In recent years, proposed federal legislation to address data privacy—the American Data Privacy and Protection Act34 and the American Privacy Rights Act35—sought to establish comprehensive federal privacy standards by preempting only certain state privacy laws36 while preserving others that reflect consumer protections and civil rights.37 Proponents of preemption of state AI laws frequently point to38 the Internet Tax Freedom Act (ITFA),39 which narrowly and clearly preempted state taxation of internet access. However, the reconciliation moratorium shared little in common with ITFA’s targeted scope. ITFA addressed a well-defined activity, taxation, with explicit guidelines and a sunset provision.

While the reconciliation moratorium was not structured as federal preemption in the traditional sense, its practical effect would have mirrored a broad preemption regime by overriding state authority across a wide range of policy domains. What made this especially dangerous is that, unlike traditional preemption frameworks, the moratorium offered no federal protections in return. It did not establish minimum standards, create oversight mechanisms, or provide any substantive safeguards for the public. Because of this, it is useful to examine the moratorium as a case study of how poorly scoped federal interventions in state authority—whether through federal preemption or temporary moratoriums—can undermine effective governance and generate legal confusion.

The reconciliation moratorium sought to broadly prevent states from regulating entire classes of AI applications across sensitive and diverse sectors such as employment, health care, housing, and elections—many of which are areas in which states have significant equities and long histories of promulgating regulations—for a period of five or 10 years. A decade-long freeze on state AI laws would be excessive by any measure. Ten years ago, there were no transformer models and no large language models, and there was no public understanding of how AI could reshape communication, labor, or information ecosystems. Since then, the field has seen the rapid rise of large-scale language models and the early development of AI agents capable of acting with increasing autonomy across digital environments. Now, leaders in the tech industry are publicly projecting that40 artificial general intelligence could emerge within the very decade during which the reconciliation moratorium would have shut states out of legislating. Even the revised five-year compromise amendment41 introduced by Sens. Cruz and Blackburn, while slightly less extreme, would still have posed serious risks. That is because any attempt to suspend state authority in this fast-evolving space, whether for five years or 10, is dangerous if not paired with a federal floor of basic safeguards.

The reconciliation moratorium attempted to prohibit state and local governments not only from passing new laws or regulations that “limit,” “restrict,” or “regulate” AI but also from enforcing existing ones for the duration of the ban, while allowing laws and regulations that “facilitate” AI. Although the exact phrasing varied slightly, the terms “regulate” and “facilitate” appeared consistently across the versions introduced in both the House and Senate. Efforts to block only restrictive AI laws while allowing those seen as supportive made the provision especially difficult to interpret, which in turn would have made consistent enforcement nearly impossible. A blanket moratorium banning all state AI laws, whether they limit or facilitate AI, would have been more straightforward to administer. This highlights the core challenge of overriding state authority in this space: no approach is both simple and neutral when it eliminates states’ ability to respond to rapidly changing risks and applies not only to future legislation but also to existing laws.

The definitions used in the reconciliation moratorium were also deeply flawed. Terms such as “AI system,” “model,” and “automated decision system” were written so broadly42 that they could apply to virtually any computational tool.43 Whether these ambiguities were intentional or accidental remains unclear. What is clear is that such ambiguous language invites lawsuits and regulatory paralysis, particularly because the reconciliation moratorium allowed private entities, not just the federal government,44 to bring enforcement actions. As a result, states would face the risk of being tied up in costly litigation for years, unable to advance legislation without legal challenge.

Even if a future AI moratorium were rewritten to limit enforcement to the federal government, it would create a different kind of barrier. States would be forced to seek clarity or approval from the federal government before acting, placing them in a constant position of uncertainty. This would function less like guidance and more like a de facto preclearance system, in which state lawmakers hesitate to act out of concern that federal officials could later reinterpret compliance standards or that a change in administration could bring a different view of what is allowed, putting previously lawful state efforts in jeopardy.

Another key concern arises when there is no exemption for criminal laws. The House version45 of the reconciliation moratorium included a carve-out to preserve states’ ability to enforce their criminal codes, but the Senate version removed that protection. Without an exemption for criminal laws, states would be blocked from prosecuting serious AI-related crimes. For instance, Utah recently enacted H.B. 0238,46 a law that expands the definition of child sexual abuse material to explicitly include AI-generated imagery. Under the Senate’s version of the reconciliation moratorium, laws such as H.B. 0238 could have been preempted. This omission could have interfered with efforts to protect children and to address a range of AI-enabled criminal activity, including identity fraud, election interference, and the production of nonconsensual or exploitative synthetic media.

Constitutional concerns

Beyond the practical flaws of the reconciliation moratorium, a ban on state AI laws raises serious constitutional questions. Under the Tenth Amendment, powers not delegated to the federal government are reserved to the states. This means that states possess core authorities such as running elections, regulating state courts, setting the rules for the administration of their own governments, licensing professionals, and protecting public health and safety. The reconciliation moratorium did not carve out exceptions for these areas. Instead, it would have unconstitutionally imposed a sweeping prohibition invalidating all forms of state AI laws that sought to restrict AI. For example, it would have prevented state legislatures from setting rules on how their own judiciaries use AI to draft opinions or review filings, an issue squarely within state control.

Many of the first laws governing AI have emerged in precisely these domains. States have already adopted laws specifically targeting AI use to ensure the integrity of their elections.47 For example, Minnesota and Texas ban48 the use of political deepfakes within certain timeframes before an election. In regulating courtroom practices, states have also taken steps to protect their courts from AI-related abuses. Texas’ 30th District Court, for instance, requires legal filings to include an “AI certificate”49 confirming that AI tools were not used to fabricate evidence or citations. The text of the reconciliation moratorium arguably would have preempted those statutes in ways that infringe on state sovereignty, removing states’ control over their internal operations and undermining their constitutionally protected role.

The push for a moratorium or preemption is industry-driven

Large AI companies and their trade associations50 have called for federal preemption of state AI laws in their submissions to the White House’s AI Action Plan,51 as documented in the public AI Action Plan database.52 Following these requests, Big Tech companies and their trade groups53 supported and lobbied for the moratorium’s inclusion in the reconciliation bill.54 The reconciliation moratorium reflected this effort, aligning closely with industry demands.

Support for preemption or a moratorium is not limited to large companies. Some smaller AI firms, including signatories to letters submitted to the AI Action Plan,55 have voiced concerns about the cost and complexity of complying with different state laws. While these concerns may not be baseless, it would be misleading to suggest that a moratorium would primarily benefit smaller players. Large technology companies would stand to gain just as much, if not more. These larger players have the resources to manage a patchwork of state laws, but they see compliance with them as a burden that slows development and limits their freedom to deploy systems on their own terms. That is why they are likely demanding preemption not solely out of principle or concern for startup burdens, but because regulatory friction threatens their market dominance. Preemption would reduce the scrutiny they face at the state level and shield their models and practices from accountability. It is crucial, then, that in any debate over federal preemption, it is clear who is asking for it, who stands to gain, and what will be lost if states are pushed out of the AI governance space.

State leadership should be protected

The United States still lacks comprehensive federal laws governing AI and privacy. In response, states have stepped forward to begin protecting their residents. Many lawmakers acted swiftly, drawing lessons from Congress’ failure to pass meaningful privacy legislation despite years of debate. They understood that delay at the federal level often means no safeguards at all. Rather than wait, states, which have long been the laboratories of American democracy, advanced laws that promote transparency, prevent abuse, and restrict dangerous uses of AI in sectors such as employment, housing, and health care.

These state efforts are critical because they allow for policy experimentation, developing best practices that can later inform federal legislation. States are uniquely positioned to respond quickly to emerging harms and to tailor policy to local needs. Across policy areas, states have taken a range of approaches that reflect their local priorities and levels of risk tolerance. Some, such as Utah, have opted for light-touch models such as regulatory sandboxes that allow innovation while monitoring outcomes. Others, such as Colorado and California, have passed more comprehensive AI accountability laws that establish transparency and oversight mechanisms.56 In climate policy, California used the authority preserved for the states under the Clean Air Act to adopt stronger protections57 and lead on vehicle emissions policy, including the Advanced Clean Cars and Advanced Clean Trucks programs. Until recently, those policies were backed by a federal waiver that enabled California to impose stricter standards than the national baseline; more than a dozen other states have since adopted them.58

That same balance should guide federal AI policymaking. While the single best way to regulate AI may not yet be known, it is clear that doing nothing is not the answer. Allowing multiple approaches from states to coexist increases the chances of identifying what works, what needs adjustment, and what should be scaled nationally. That flexibility is especially important in a field such as AI, where the technology evolves faster than Congress tends to legislate.

Industry has long warned about the “risk” or “costs” of a fragmented patchwork of state laws. But those same companies have often been the first to oppose serious federal proposals—particularly comprehensive privacy bills59—that would have unified standards and granted them the very preemption they now claim is necessary. The result is a landscape where the patchwork is not a byproduct of overzealous state action but a direct consequence of industry obstruction at the federal level. In that context, state leadership is not a bug but a feature of the system. The United States has long treated states as laboratories of democracy, giving them the freedom to pilot new solutions.

In practice, real legal conflict across states is often what prompts Congress to act. When courts reached conflicting decisions in the 1990s about liability for online platforms, that tension helped prompt Congress to create Section 23060 to resolve the conflict. If state AI laws ever generate similar conflicts that create true uncertainty for national compliance, federal lawmakers will face strong pressure to intervene. That is how the system is designed to function.

Federal AI policy should build on state progress

Congress should not be focused on blocking the limited protections that currently exist at the state level, particularly since few have actually come into effect. Instead, it should focus on establishing foundational federal policies that start with prohibiting the most dangerous abuses of AI and creating guardrails for responsible use of higher-risk applications. As part of that work, Congress should learn from the states. Some of the most urgent safeguards, including prohibitions on abusive surveillance, algorithmic discrimination, and algorithmic pricing, have already begun to take shape at the state level and are likely to continue expanding as more states act. For example, New Hampshire and Oregon have passed laws banning the use of real-time remote biometric identification for surveillance in public spaces by law enforcement without a warrant,61 and five states have introduced bills to curb surveillance pricing.62 Preserving space for state experimentation and leadership is essential to building a governance model that is both protective and adaptive. Lawmakers can start by calling in state officials who have already passed AI laws to testify, offering insights into what is working and where challenges remain, something Congress never did when it advanced the reconciliation moratorium. Federal legislation to prohibit specific, high-risk uses of AI may be called for to ensure consistent protections across jurisdictions.

A comprehensive federal privacy law is also a necessary starting point, both because privacy is a fundamental human right that deserves robust protection and because privacy safeguards determine what data can be collected and used in the first place.63 AI systems are only as safe and fair as the inputs they are trained on. Without clear rules for how data are collected, shared, and governed, even the most carefully crafted AI policies will fall short.

Conclusion

The reconciliation AI moratorium proposed in the BBB may have ultimately been scuttled, but the broader effort under the Trump administration to assert federal control over AI policy is just beginning—and House Republican leaders have already confirmed they are actively seeking new avenues to impose a moratorium on state AI laws.64 Congress has an important role to play in setting national rules for AI, but that work should not start by pushing states aside. Strong federal protections are necessary and can be put in place without cutting off efforts from the states that are already underway across the country. States are moving forward with real policies that respond to real problems, and blocking those efforts would not create clarity. It would slow progress, protect industry interests, and silence the only voices trying to keep this technology accountable. Poorly crafted federal preemption and blanket moratoriums on state AI laws in the absence of federal standards are dangerous approaches. Congress should reject any push to centralize control at the expense of state power.

Endnotes

  1. Punchbowl News, “House eyeing AI preemption in NDAA, Scalise says,” November 17, 2025, available at https://punchbowl.news/article/tech/house-ai-preemption-ndaa/.
  2. John Mac Ghlionn, “The next recession is coming—and this time, you’re the collateral,” The Hill, November 17, 2025, available at https://thehill.com/opinion/finance/5608486-ai-jobs-relevance-recession/.
  3. Donald Trump, @realDonaldTrump, November 18, 2025, 4:56 p.m. ET, Truth Social, available at https://truthsocial.com/@realDonaldTrump/posts/115572931492563128.
  4. U.S. Senate Committee on Commerce, Science, and Transportation, “Chairman Cruz Releases Budget Reconciliation Text,” Press release, June 5, 2025, available at https://www.commerce.senate.gov/2025/6/chairman-cruz-releases-budget-reconciliation-text.
  5. Nicole Alvarez and Adam Conner, “4 Reasons the Senate’s AI Pause Should Be Opposed,” Center for American Progress, July 1, 2025, available at https://www.americanprogress.org/article/4-reasons-the-senates-ai-pause-should-be-opposed/.
  6. Adam Conner, “The House Is Close To Passing A Moratorium on State Efforts To Regulate AI,” Center for American Progress, May 15, 2025, available at https://www.americanprogress.org/article/the-house-is-close-to-passing-a-moratorium-on-state-efforts-to-regulate-ai/.
  7. One Big Beautiful Bill Act, H.R. 1, 119th Cong., 1st sess. (July 4, 2025), available at https://www.congress.gov/bill/119th-congress/house-bill/1.
  8. Ibid.
  9. U.S. Senate Committee on Commerce, Science, and Transportation, “Senate Strikes AI Moratorium from Budget Reconciliation Bill in Overwhelming 99-1 Vote,” Press release, July 1, 2025, available at https://www.commerce.senate.gov/2025/7/senate-strikes-ai-moratorium-from-budget-reconciliation-bill-in-overwhelming-99-1-vote/8415a728-fd1d-4269-98ac-101d1d0c71e0.
  10. Nicole Alvarez, “The Senate’s AI Ban Applies to Every State, Not Just BEAD Recipients,” Center for American Progress, June 13, 2025, available at https://www.americanprogress.org/article/the-senates-ai-ban-applies-to-every-state-not-just-bead-recipients/; Mackenzie Arnold and Charlie Bullock, “The AI moratorium—the Blackburn amendment and new requirements for ‘generally applicable’ laws,” Institute for Law and AI, June 29, 2025, available at https://law-ai.org/the-ai-moratorium-the-blackburn-amendment-and-new-requirements-for-generally-applicable-laws/; Adam Conner and Nicole Alvarez, “The Senate’s AI Pause May Take Billions in State Broadband Funds Hostage,” Center for American Progress, July 1, 2025, available at https://www.americanprogress.org/article/the-senates-ai-pause-may-take-billions-in-state-broadband-funds-hostage/; Charlie Bullock, “The AI Moratorium—more deobligation issues,” Institute for Law and AI, June 2025, available at https://law-ai.org/the-ai-moratorium-more-deobligation-issues/; Charlie Bullock and Mackenzie Arnold, “The AI Moratorium—deobligation issues, BEAD funding, and independent enforcement,” Institute for Law and AI, June 2025, available at  https://law-ai.org/the-ai-moratorium-deobligation-issues-bead-funding-and-independent-enforcement/.
  11. Alvarez, “The Senate’s AI Ban Applies to Every State, Not Just BEAD Recipients.”
  12. Conner and Alvarez, “The Senate’s AI Pause May Take Billions in State Broadband Funds Hostage.”
  13. Institute for Progress, “AI Action Plan Database,” available at https://www.aiactionplan.org/ (last accessed October 2025); U.S. Chamber of Commerce, “Coalition Letter to the Senate Supporting the Moratorium on AI Regulation Enforcement,” June 9, 2025, available at https://www.uschamber.com/technology/coalition-letter-to-the-senate-supporting-the-moratorium-on-ai-regulation-enforcement.
  14. The White House, “America’s AI Action Plan” (Washington: 2025), available at https://www.whitehouse.gov/wp-content/uploads/2025/07/Americas-AI-Action-Plan.pdf.
  15. Dean W. Ball, “‘Be It Enacted’: A Proposal for Federal AI Preemption,” Hyperdimensional, October 2, 2025, available at https://www.hyperdimensional.co/p/be-it-enacted.
  16. Ibid.
  17. Anthony Adragna, “Ted Cruz says his AI bill will have a moratorium on state regulations,” PoliticoPro, May 15, 2025, available at https://subscriber.politicopro.com/article/2025/05/cruz-says-his-ai-bill-will-have-a-moratorium-on-state-regulations-00352569.
  18. U.S. Senate Committee on Commerce, Science, and Transportation, “A Legislative Framework for American Leadership in Artificial Intelligence,” available at https://www.commerce.senate.gov/services/files/50958F76-A64C-418A-8FCA-650D9DE2602B (last accessed October 2025); U.S. Senate Committee on Commerce, Science, and Transportation, “Sen. Cruz Unveils AI Policy Framework to Strengthen American AI Leadership,” Press release, September 10, 2025, available at https://www.commerce.senate.gov/2025/9/sen-cruz-unveils-ai-policy-framework-to-strengthen-american-ai-leadership.
  19. The SANDBOX Act, S.2750, 119th Cong., 1st sess. (September 10, 2025), available at https://www.congress.gov/bill/119th-congress/senate-bill/2750/text.
  20. Conner, “The House Is Close To Passing A Moratorium on State Efforts To Regulate AI.”
  21. U.S. House Committee on Energy and Commerce, “Committee Print: Title IV ‘Energy and Commerce,’” available at https://d1dth6e84htgma.cloudfront.net/Subtitle_C_Communications_4e3fbcc3bc.pdf (last accessed October 2025).
  22. U.S. Senate Committee on Commerce, Science, and Transportation, “Budget Reconciliation Text,” available at https://www.commerce.senate.gov/services/files/AD3D04CF-52B4-411F-854B-44C55ABBADDA (last accessed October 2025).
  23. Alvarez, “The Senate’s AI Ban Applies to Every State, Not Just BEAD Recipients.”
  24. Adragna, “Ted Cruz says his AI bill will have a moratorium on state regulations.”
  25. U.S. Senate Committee on Commerce, Science, and Transportation, “Budget Reconciliation Text,” available at https://www.commerce.senate.gov/services/files/A66EB681-D9B4-4247-9302-4B88B28294D2 (last accessed October 2025).
  26. Conner and Alvarez, “The Senate’s AI Pause May Take Billions in State Broadband Funds Hostage.”
  27. U.S. Senate Committee on Commerce, Science, and Transportation, “Budget Reconciliation Text: Support for Artificial Intelligence Under the Broadband Equity, Access, and Deployment Program,” U.S. Senator Marsha Blackburn, available at https://www.blackburn.senate.gov/services/files/178AE7B5-7583-415E-8CF3-475241C6E5F9 (last accessed October 2025).
  28. U.S. Senate Committee on Commerce, Science, and Transportation, “Ranking Member Cantwell Says Blackburn-Cruz AI Moratorium Amendment Does Nothing to Protect Kids and Consumers,” Press release, June 30, 2025, available at https://www.commerce.senate.gov/2025/6/ranking-member-cantwell-says-blackburn-cruz-ai-moratorium-amendment-does-nothing-to-protect-kids-and-consumers.
  29. Maria Curi, “Scoop: Blackburn and Cruz reach AI pause deal,” Axios Pro Policy, June 29, 2025, available at https://www.axios.com/pro/tech-policy/2025/06/30/blackburn-cruz-reach-ai-pause-deal.
  30. Ruth Reader, “Behind the tanked AI moratorium,” Politico, July 3, 2025, available at https://www.politico.com/newsletters/future-pulse/2025/07/03/behind-the-tanked-ai-moratorium-00437679.
  31. U.S. Senate Committee on Commerce, Science, and Transportation, “Senate Strikes AI Moratorium from Budget Reconciliation Bill in Overwhelming 99-1 Vote.”
  32. One Big Beautiful Bill Act.
  33. U.S. Environmental Protection Agency, “Clean Air Act Text,” available at https://www.epa.gov/clean-air-act-overview/clean-air-act-text (last accessed October 2025); Reema Bzeih, Sam Ricketts, and Shannon Baker-Branstetter, “States Must Lead the Way on Climate” (Washington: Center for American Progress, 2025), available at https://www.americanprogress.org/article/states-must-lead-the-way-on-climate/.
  34. American Data Privacy and Protection Act, H.R. 8152, 117th Cong., 2nd sess. (December 30, 2022), available at https://www.congress.gov/bill/117th-congress/house-bill/8152/text.
  35. American Privacy Rights Act of 2024, H.R. 8818, 118th Cong., 2nd sess. (June 25, 2024), available at  https://www.congress.gov/bill/118th-congress/house-bill/8818/text.
  36. Müge Fazlioglu, “Ceiling or floor? State law preemption and preservation in U.S. federal privacy bills,” IAPP, June 17, 2024, available at https://iapp.org/news/a/ceiling-or-floor-state-law-preemption-and-preservation-in-u-s-federal-privacy-bills.
  37. Ibid.
  38. Kevin Frazier and Adam Thierer, “1,000 AI Bills: Time for Congress to Get Serious About Preemption,” Lawfare, May 9, 2025, available at https://www.lawfaremedia.org/article/1-000-ai-bills–time-for-congress-to-get-serious-about-preemption.
  39. Internet Tax Freedom Act, Public Law 435, 108th Cong., 2nd sess. (December 3, 2004), available at  https://www.congress.gov/108/plaws/publ435/PLAW-108publ435.htm.
  40. Eric Siegel, “Elon Musk Predicts Artificial General Intelligence In 2 Years. Here’s Why That’s Hype,” Forbes, April 10, 2024, available at https://www.forbes.com/sites/ericsiegel/2024/04/10/artificial-general-intelligence-is-pure-hype/; Lex Clips, “We might build AGI by 2026 | Dario Amodei and Lex Fridman,” YouTube, November 18, 2024, available at https://www.youtube.com/watch?v=Xywqm0vlUxk; Cecilia Kang and Cade Metz, “How Sam Altman Sidestepped Elon Musk to Win Over Donald Trump,” The New York Times, February 8, 2025, available at https://www.nytimes.com/2025/02/08/technology/sam-altman-elon-musk-trump.html.
  41. U.S. Senate Committee on Commerce, Science, and Transportation, “Budget Reconciliation Text: Support for Artificial Intelligence Under the Broadband Equity, Access, and Deployment Program.”
  42. David Brody, “The Big Beautiful Bill Could Decimate Legal Accountability for Tech and Anything Tech Touches,” Tech Policy Press, May 27, 2025, available at https://www.techpolicy.press/the-big-beautiful-bill-could-decimate-legal-accountability-for-tech-and-anything-tech-touches/.
  43. David Brody, “The AI Moratorium Could Gut State Tax Revenues and Give Fraudsters a Pass,” Tech Policy Press, June 25, 2025, available at https://www.techpolicy.press/the-ai-moratorium-could-gut-state-tax-revenues-and-give-fraudsters-a-pass/.
  44. Bullock and Arnold, “The AI Moratorium—deobligation issues, BEAD funding, and independent enforcement.”
  45. U.S. House Committee on Energy and Commerce, “Committee Print: Title IV ‘Energy and Commerce.’”
  46. Sexual Exploitation of a Minor Amendments, Utah H.B. 0238 (March 13, 2024), available at https://legiscan.com/UT/bill/HB0238/2024.
  47. National Conference of State Legislatures, “Artificial Intelligence (AI) in Elections and Campaigns,” available at https://www.ncsl.org/elections-and-campaigns/artificial-intelligence-ai-in-elections-and-campaigns (last accessed October 2025).
  48. Ibid.
  49. The State of Texas 30th District Court, “Standing Order Regarding Use of Artificial Intelligence,” March 3, 2024, available at https://topics.txcourts.gov/LocalRulesPublic/PreviewAttachment/1866.
  50. Linda Moore, “Re: TechNet Comments on the Development of an Artificial Intelligence (AI) Action Plan,” TechNet, March 11, 2025, available at https://files.nitrd.gov/90-fr-9088/TechNet-RFI-2025.pdf; Information Technology Industry Council, “ITI Comments to OSTP RFI on an AI Action Plan,” March 15, 2025, available at https://files.nitrd.gov/90-fr-9088/AI-RFI-2025-3147.pdf; Joshua Landau, “Comments of Computer & Communications Industry Association,” Networking and Information Technology Research and Development Program, available at https://files.nitrd.gov/90-fr-9088/CCIA-AI-RFI-2025.pdf (last accessed November 2025); Tom Quaadman, “Re: Request for Information on the Development of an Artificial Intelligence (AI) Action Plan,” U.S. Chamber of Commerce, March 14, 2025, available at https://files.nitrd.gov/90-fr-9088/US-Chamber-of-Commerce-AI-RFI-2025.pdf.
  51. The White House, “Public Comment Invited on Artificial Intelligence Action Plan,” February 25, 2025, available at https://www.whitehouse.gov/briefings-statements/2025/02/public-comment-invited-on-artificial-intelligence-action-plan/; The White House, “America’s AI Action Plan.”
  52. Institute for Progress, “AI Action Plan Database”; Networking and Information Technology Research and Development Program, “Comments Received in Response to: Request for Information on the Development of an Artificial Intelligence (AI) Action Plan (‘Plan’),” available at https://www.nitrd.gov/coordination-areas/ai/90-fr-9088-responses/ (last accessed October 2025).
  53. Chase Difeliciantonio, “Trump’s allies wanted to strip states’ powers on AI. It backfired,” Politico, July 2, 2025, available at https://www.politico.com/news/2025/07/02/ai-regulation-trump-allies-state-powers-00428337.
  54. U.S. Chamber of Commerce, “Coalition Letter to the Senate Supporting the Moratorium on AI Regulation Enforcement.”
  55. Jai Ramaswamy, Collin McCune, and Matt Perault, “a16z’s Recommendations for the National AI Action Plan,” Andreessen Horowitz, March 14, 2025, available at https://a16z.com/a16zs-recommendations-for-the-national-ai-action-plan/.
  56. Transparency in Frontier AI Act, California S.B. 53 (September 29, 2025), available at https://legiscan.com/CA/text/SB53/id/3270002; Consumer Protections for Artificial Intelligence, Colorado S.B. 205, Colorado General Assembly (May 17, 2024), available at https://leg.colorado.gov/bills/sb24-205.
  57. Bzeih, Ricketts, and Baker-Branstetter, “States Must Lead the Way on Climate.”
  58. Ibid.
  59. Nicole Alvarez, “CAP’s Response to the House Energy and Commerce Committee Privacy Working Group’s RFI,” Center for American Progress, April 21, 2025, available at https://www.americanprogress.org/article/caps-response-to-the-house-energy-and-commerce-committee-privacy-working-groups-rfi/.
  60. Electronic Frontier Foundation, “Section 230: Legislative History,” available at https://www.eff.org/issues/cda230/legislative-history (last accessed October 2025).
  61. An Act relative to the use of artificial intelligence by state agencies, New Hampshire H.B. 1688 (July 22, 2024), available at https://legiscan.com/NH/bill/HB1688/2024.
  62. Alfred Ng, “The fight over unfair pricing goes national,” Politico, May 28, 2025, available at https://www.politico.com/news/2025/05/28/trump-surveillance-pricing-00370566; Jake Laperruque, “Status of State Laws on Facial Recognition Surveillance: Continued Progress and Smart Innovations,” Tech Policy Press, January 6, 2025, available at https://www.techpolicy.press/status-of-state-laws-on-facial-recognition-surveillance-continued-progress-and-smart-innovations/.
  63. Nicole Alvarez, “4 Protections Congress Must Include in Federal Privacy Law,” Center for American Progress, April 21, 2025, available at https://www.americanprogress.org/article/4-protections-congress-must-include-in-federal-privacy-law/.
  64. Punchbowl News, “House eyeing AI preemption in NDAA, Scalise says.”

The positions of American Progress, and our policy experts, are independent, and the findings and conclusions presented are those of American Progress alone. American Progress would like to acknowledge the many generous supporters who make our work possible.

Author

Nicole Alvarez

Senior Policy Analyst

Team

Technology Policy

Our team envisions a better internet for all Americans, advancing ideas that protect consumers, defend their rights, and promote equitable growth.
