For the first time, Washington is close to making a decision on how to regulate artificial intelligence. The battle that is brewing is not about the technology itself, but about who gets to regulate it.
In the absence of a meaningful federal AI standard focused on consumer safety, states have introduced dozens of bills to protect residents from AI-related harms, including California’s AI Safety Bill SB-53 and Texas’ Responsible AI Governance Act, which prohibits intentional misuse of AI systems.
Tech giants and emerging Silicon Valley startups claim that such laws create an unworkable patchwork that threatens innovation.
“It will slow us down in the race against China,” Josh Vlasto, co-founder of the pro-AI PAC Leading the Future, told TechCrunch.
Industry, along with many allies in the White House, is pushing for a national standard or nothing at all. In the trenches of that all-or-nothing battle, new efforts have emerged to prevent states from enacting their own AI legislation.
House lawmakers are reportedly trying to use the National Defense Authorization Act (NDAA) to block state AI laws. Meanwhile, a leaked draft of a White House executive order also shows strong support for preempting state efforts to regulate AI.
A sweeping preemption that would strip states of their right to regulate AI is unpopular in Congress, which voted overwhelmingly against a similar measure earlier this year. Lawmakers have argued that without a federal standard, barring states from acting would leave consumers vulnerable to harm and let tech companies operate without oversight.
To create this national standard, Rep. Ted Lieu (D-CA) and the bipartisan House AI Task Force are preparing a package of federal AI bills covering a range of consumer protections, including health care fraud, transparency, child safety, and catastrophic risk. Such a package would likely take months, if not years, to become law, highlighting why the current push to limit state power is one of the most contentious battles in AI policy.

Efforts to prevent states from regulating artificial intelligence have intensified in recent weeks.
House Majority Leader Steve Scalise (R-LA) said the House considered inserting language into the National Defense Authorization Act that would prevent states from regulating artificial intelligence, Punchbowl News reported. Congress was reportedly working to finalize an agreement on the defense bill before Thanksgiving, according to Politico. A source familiar with the matter told TechCrunch that negotiations focused on narrowing the scope to preserve state authority in areas such as child safety and transparency.
Meanwhile, a leaked draft of a White House executive order reveals a potential preemption strategy. The order, reportedly on hold, would create an “AI Litigation Task Force” to challenge state AI laws in court, direct agencies to evaluate state laws deemed “onerous,” and push the FCC and FTC toward national standards that override state rules.
Notably, the order would give David Sacks – Trump’s AI and crypto czar and co-founder of venture capital firm Craft Ventures – joint authority in creating a unified legal framework. That would give Sacks direct influence over AI policy, supplanting the typical role of the White House Office of Science and Technology Policy and its head, Michael Kratsios.
Sacks has publicly called for limiting state regulation and keeping federal oversight light, favoring industry self-regulation in order to “maximize growth.”
Sacks’ position reflects the view of much of the AI industry. Several major pro-AI political action committees have emerged in recent months, spending hundreds of millions of dollars in local and state elections to oppose candidates who support AI regulation.
Leading the Future – backed by Andreessen Horowitz, OpenAI president Greg Brockman, Perplexity, and Palantir co-founder Joe Lonsdale – has raised more than $100 million. This week, the group launched a $10 million campaign to push Congress to craft a national AI policy that overrides state laws.
“When you’re trying to drive innovation in the tech sector, you can’t have a situation where all these laws keep coming from people who don’t necessarily have the technical expertise,” Vlasto told TechCrunch.
He said a patchwork of government regulations “will slow us down in the race against China.”
Nathan Limmer, executive director of Build American AI, the PAC’s advocacy arm, emphasized that the group supports preemption even without federal consumer protections specific to AI. Existing laws, such as those covering fraud or product liability, are sufficient to address AI harms, Limmer said. While state laws often seek to prevent problems before they arise, Limmer prefers a more reactive approach: letting companies move quickly and addressing harms in court later.

Alex Bores, a New York Assembly member running for Congress, is one of Leading the Future’s top targets. He sponsored the RAISE Act, which requires large AI labs to have safety plans to prevent serious harms.
“I believe in the power of AI, which is why it’s so important to have sensible regulations,” Bores told TechCrunch. “Ultimately, the AI that wins in the market will be trustworthy AI, and the market often undervalues or creates weak short-term incentives to invest in safety.”
Bores supports a national AI policy, but believes states can move faster to address emerging risks.
And it is true that states are moving faster.
As of November 2025, 38 states had adopted more than 100 AI-related laws this year, mainly targeting deepfakes, transparency, disclosure, and government use of AI. (One study found that 69% of these laws impose no requirements on AI developers at all.)
Activity in Congress provides further evidence for the “slower than the states” argument. Hundreds of AI bills have been introduced, but only a few have passed. Since 2015, Rep. Lieu has submitted 67 bills to the House Science Committee. Only one became law.
More than 200 lawmakers signed an open letter opposing AI preemption in the National Defense Authorization Act, arguing that “states serve as laboratories of democracy” that must “retain the flexibility to meet new digital challenges as they emerge.” Nearly 40 state attorneys general also sent an open letter opposing a ban on state AI regulation.
Cybersecurity expert Bruce Schneier and data scientist Nathan E. Sanders – authors of Rewiring Democracy: How AI Will Transform Our Politics, Government, and Citizenship – argue that the patchwork complaint is exaggerated.
They point out that AI companies already adhere to stricter EU regulations, and most industries are finding a way to operate under various state laws. The real motive, they say, is to avoid accountability.
Lieu is drafting a sweeping bill of more than 200 pages that he hopes to introduce in December. It covers a range of issues, such as fraud penalties, protections against deepfakes, whistleblower protections, compute resources for academia, and mandatory testing and disclosure for large language model companies.
That last provision would require AI labs to test their models and publish the results – something most of them now do voluntarily. Lieu has not yet introduced the bill, but he said it does not direct any federal agencies to review AI models directly. That differs from a similar bill introduced by Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT), which would require a government-run evaluation program for advanced AI systems before they can be deployed.
Lieu acknowledged that his bill would not be as strict, but said it has a better chance of becoming law.
“My goal is to get something into law this session,” Lieu said, noting that House Majority Leader Scalise is openly hostile to regulating artificial intelligence. “I’m not writing the bill I would write if I were king. I’m trying to write a bill that can pass the Republican-controlled House, the Republican-controlled Senate, and the Republican-controlled White House.”