Update, July 1, 2025
On July 1, the Senate voted 99–1 in favor of striking the AI pause from the reconciliation bill. This bipartisan amendment vote removed the harmful AI provision before final passage. Later that day, the Senate passed the One Big Beautiful Bill Act, which sent the bill back to the House of Representatives without the AI pause.
Even if the bill is changed, this is not the moment to tie states’ hands on AI oversight, especially when states have led the way in addressing AI harms while Congress has stalled in passing comprehensive federal AI legislation. This AI pause would prevent states from enforcing laws that impose criminal penalties for AI-related harms, including in areas such as “deepfake” child sex abuse material (CSAM), as well as laws addressing AI’s use in health care and elections. States would be forced to choose between securing essential broadband funding to expand affordable, reliable internet access in unserved or underserved locations and preserving their ability to protect citizens from AI-related harms. Either way, Americans lose.
Senators should oppose the AI pause and vote for amendments to remove it from the final reconciliation bill if given the opportunity. This column details four reasons why it is dangerous policy.
1. The AI pause would immediately nullify most existing state AI laws
The AI pause does not just prevent new laws; it would also wipe out most existing state AI regulations for states subject to it.
Hundreds of existing AI laws are at risk
Consumer rights advocacy group Public Citizen has published a preliminary analysis identifying, as of June 26, more than 170 existing state AI laws across all 50 states that are likely to be preempted by the AI pause, with another 30-plus that may also be affected—including those addressing kids’ online safety, deepfakes, and government use of AI.
The AI pause would block enforcement of state laws regulating AI in health care
In states subject to it, the AI pause would wipe out critical state safeguards designed to protect patients in both health insurance and clinical care settings. In Maryland, a new law requires insurers to ensure that any AI system used in utilization review operates fairly and transparently. And Texas recently passed legislation requiring health maintenance organizations and insurers to disclose when AI is used in patient care decisions. Under the AI pause, these laws and others like them would be blocked for the next 10 years.
The AI pause would block enforcement of state AI-related criminal laws
The Senate’s version of the AI pause offers no exemptions for state criminal laws, as the House version did. States could no longer enforce measures that punish AI-driven misconduct—such as Alabama’s new law making it a crime to distribute materially deceptive, AI-generated campaign media intended to mislead voters. The AI pause would likewise halt state prosecutions under deepfake CSAM laws that specifically mention AI, even though this content is not necessarily illegal under federal law. Additionally, the pause would prevent states from adopting or enforcing laws to protect residents from AI-driven scams and fraud.
Overly broad definitions would sweep in even more state laws
The bill defines both “artificial intelligence system” and “automated decision system” in sweeping terms that could encompass far more laws than states realize.
2. The AI pause blocks states from passing new AI safeguards
States subject to the AI pause would be barred from passing new laws to address emerging and evolving AI risks for the next decade.
Exceptions only favor AI deployment, not public protection
The bill’s exceptions are narrowly written to allow only state laws that make it easier to deploy AI systems or impose requirements that already exist under federal or generally applicable state law, such as common law. There are currently no comprehensive federal laws governing the use of AI.
States would be barred from addressing severe AI harms
If enacted, the AI pause would block states from banning some of the most harmful AI practices. This includes laws prohibiting AI systems that automatically fire employees without human review or that deny health insurance claims without human oversight.
3. The AI pause creates a legal and compliance nightmare for states
Even states that want to comply with the AI pause would face enormous legal, administrative, and logistical burdens.
States would have to certify AI pause compliance in every BEAD report
The bill requires states to certify compliance with the AI pause in every Broadband Equity, Access, and Deployment (BEAD) program report—initial, semiannual, and final—regardless of whether they seek any of the new $500 million in funding. This effectively turns a broadband deployment program into an ongoing AI pause compliance enforcement tool.
States may face litigation risks due to unintentional noncompliance
The bill’s vague and sweeping language increases the risk that states could unintentionally fall out of compliance, exposing them to lawsuits from private parties or enforcement actions from the National Telecommunications and Information Administration (NTIA). To avoid this, states would likely need to conduct a comprehensive legal inventory of all state and local laws to identify any that could trigger noncompliance. The legal costs for states to continually defend against these lawsuits could be significant.
4. Changes do not fix the fundamental flaw in the AI pause
Although the bill has reportedly been revised to limit the AI pause to states that seek the new $500 million, the final text is not yet public, and it remains critical to confirm that the revision fully resolves the risks posed by earlier drafts, which affected the full $42.45 billion BEAD program. But even if narrowed, the policy still represents an inappropriate use of infrastructure funding to push unrelated political objectives. States would be forced to choose between critical broadband funding needed to expand internet access in underserved and rural communities and their ability to protect residents from AI-related harms.
Conclusion
The Senate’s AI pause would strip those states subject to it of their ability to protect residents from the growing, unpredictable dangers posed by AI systems. Whether it applies to $500 million or the full BEAD program, the core problem remains the same: This is not the moment to block states from enforcing or passing laws that safeguard people’s health and livelihoods. Every senator with a stake in their state’s broadband buildout or in preserving state authority to protect residents should insist that this provision be removed from the final reconciliation bill.