Atypical Attitudes on AI and the Provision That Vanished From the “One Big Beautiful Bill”

Last week, AI governance practitioners and policymakers braced themselves for a federal ban that would have erased state AI laws. But in the final hours before voting on the “One Big Beautiful Bill” ahead of the Independence Day holiday, the U.S. Senate removed a proposed moratorium on state legislation. In the few weeks between its introduction and its demise, the moratorium revealed deep divisions not just over AI policy, but over states’ rights, federalism, and the future of tech oversight in the United States.

The Provision: A Federal Lock on State AI Laws

The original moratorium, tucked into the House’s budget reconciliation bill, would have blocked states from passing or enforcing their own AI rules for 10 years. To address procedural problems with the provision, the Senate tied it to federal funding: a state that attempted to regulate AI risked losing millions in federal broadband funding.

Support and Opposition

Supporters, largely from the tech industry, such as Microsoft, Meta, and Google, argued that a moratorium on state laws was necessary to create a consistent national standard in place of a patchwork of conflicting state laws. But in the absence of a comprehensive federal law, or even a plan for one, opponents, including notably Anthropic’s CEO, Dario Amodei, feared the moratorium would create an enforcement vacuum and be “too blunt an instrument.”

The AI moratorium sparked a rare kind of cross-party backlash. Democrats largely opposed it. While some Republicans, like Senator Ted Cruz, endorsed the provision—calling it a necessary check on overregulation that could stifle innovation—others on the right were blindsided. Some House Republicans reported that when they voted on the initial bill, they had been unaware of the full scope of the AI provision and its reach. The conservative critique crystallized around concerns that the moratorium undercut federalism, would undermine child protection laws, and risked giving the tech industry “free rein” at a time when many Republicans are already wary of Big Tech’s power.

As criticism mounted, lawmakers attempted to soften the provision. Several senators negotiated a compromise amendment that reduced the moratorium from ten years to five. Still, it wasn’t enough: 17 Republican governors and over 130 advocacy groups voiced opposition, arguing that even a five-year freeze would undermine states’ abilities to respond to rapidly evolving AI risks.

State attorneys general also expressed concern that the moratorium could hamper enforcement of existing consumer protection laws by constricting their ability to challenge deceptive uses of AI. A letter signed by the attorneys general of several states, including California, Connecticut, and New Jersey, explained that state privacy authorities are often the first to receive consumer complaints and identify problematic practices. Moreover, stripping away state laws without a federal law in place to replace them would be unprecedented under the constitutional doctrine of “preemption,” and, in the case of AI, would create a legal vacuum.

In a rare moment of bipartisan cooperation, Senator Marsha Blackburn (R-TN) and Senator Maria Cantwell (D-WA) co-sponsored an amendment last Monday to strip the AI provision from the bill entirely—which received the support of 99 senators and relegated the AI moratorium to history.

Legal and Policy Implications: A Cautionary Tale

Although it may look like a legislative footnote, the AI moratorium’s collapse surfaced important policy considerations. The controversy illustrated a fundamental tension in U.S. governance: who gets to write the rules for emerging technologies? Traditionally, federalism allows states to act as “laboratories of democracy,” trying different regulatory approaches. Blocking that function, in an area as fast-moving as AI, raised alarms among constitutional scholars and civil liberties groups.

Politically, the episode suggested a contradiction within an administration torn between fiercely protecting vulnerable victims of AI (with President Trump’s recent signing of the Take It Down Act, which addresses non-consensual intimate images) and not wanting to alienate the tech industry. Many GOP lawmakers expressed concern that the moratorium could clash with bipartisan efforts to protect children and minors from AI-generated abuse.

While several states, such as Colorado, Texas, and Utah, have passed comprehensive AI-related laws, this is only the beginning. AI regulation is no longer a niche policy issue: it’s a battleground where issues of innovation, federalism, corporate power, and public safety collide. If there’s a lesson in this failed provision, it’s that future efforts will need to be far more comprehensive, transparent, and inclusive.


This blog post was researched and drafted by Renée Ramona Robinson, a Summer Associate at AMBART LAW pllc, under the supervision of attorney Yelena Ambartsumian.
