Five Days to Judgment: Commerce Department Targets State AI Laws in Historic Federal Power Grab
The most consequential AI regulatory evaluation in American history arrives March 11, potentially dismantling state protections, putting billions in federal funding at stake, and setting up constitutional warfare between Washington and the states.

The Countdown Begins
On March 11, 2026, five days ago, the U.S. Department of Commerce was required to publish what may become the most destructive document in modern AI governance: a comprehensive evaluation identifying state AI laws deemed "onerous" by the Trump administration. The department had been tasked with conducting a nationwide review of state AI statutes and regulatory proposals and submitting its findings to the White House by that deadline, even as businesses scrambled to understand the implications.
This isn't regulatory housekeeping. The report will identify state AI laws that the administration deems inconsistent with federal policy, and it will serve as the basis for potential federal enforcement, litigation, and legislative proposals aimed at establishing a national AI policy framework. It's the opening salvo in what legal experts describe as an unprecedented federal assault on state sovereignty in the digital age.
The evaluation stems from Executive Order 14365, signed December 11, 2025, which seeks to "establish a uniform Federal policy framework for AI" and to preempt state AI laws that conflict with that framework. But an executive order lacks the constitutional force to actually overturn state law; that power belongs to Congress and the courts.
The Target List
Policy discussions surrounding the executive order indicate that the review is focusing on several categories of state AI regulation. These include: algorithmic discrimination laws governing automated decision systems, transparency obligations affecting generative AI models and training data, state regulation of AI-generated political content and deepfakes, and reporting or governance obligations imposed on AI developers. Comprehensive AI regulatory frameworks adopted or proposed in states such as Colorado, California, and New York have received particular attention in federal policy discussions.
Colorado's AI Act faces particular scrutiny. The executive order directs the Secretary of Commerce to flag burdensome state AI laws that conflict with federal policy, particularly those requiring AI systems to alter truthful outputs or compelling disclosures that may violate First Amendment protections, and it cites the Colorado AI Act as an example. The law, delayed until June 30, 2026, requires developers of high-risk AI systems to prevent algorithmic discrimination, precisely the kind of "bias mitigation" the Trump administration considers deceptive manipulation.
Members of Congress are already pressing the Commerce Department on which laws to target. Reps. Gabe Evans (R-Colo.) and Nick Langworthy (R-NY) wrote to Commerce Secretary Howard Lutnick on Feb. 19 urging the department to include Colorado's AI Act and New York's Responsible AI Safety and Education (RAISE) Act in the evaluation.
The Financial Weapon
The administration isn't relying solely on legal challenges. The executive order directs the Commerce Department to condition previously allocated broadband infrastructure funding on the repeal of state AI regulations deemed onerous: states with flagged AI laws would become ineligible for nondeployment funds under the $42.45 billion Broadband Equity, Access and Deployment (BEAD) program. As of the latest update from the National Telecommunications and Information Administration (NTIA), the Commerce Department bureau overseeing BEAD, 50 of 56 state and territory proposals have been approved, with roughly $21 billion being spent on deployments and $21 billion remaining for nondeployment purposes.
This represents economic warfare disguised as policy coordination. States must choose between protecting their citizens from AI harm and receiving federal infrastructure funding. It's the kind of coercive federalism the Supreme Court has previously found unconstitutional, yet here it stands as explicit policy.
The Constitutional Crisis
Critically, preemption is not automatic. The Executive Order, standing on its own, lacks preemptive force, as it is not a statute enacted by Congress nor a regulation enacted pursuant to congressional authorization. Moreover, even where applicable, federal preemption is not self-executing.
The administration has established an AI Litigation Task Force within the Department of Justice, which, since January 10, 2026, has been responsible for challenging state AI laws in federal court on the grounds that they unconstitutionally burden interstate commerce, are preempted by federal regulations, or are otherwise unlawful in the Attorney General's judgment.
But the legal theory is shaky at best. The Trump Administration's legal theory posits that if an AI model is trained on data reflecting societal patterns, forcing developers to alter the model's outputs to mitigate bias compels them to produce results that are less faithful to the underlying data. Under this interpretation, such mitigation renders the model less "truthful" and, therefore, deceptive. Policy statements are interpretive rather than binding regulations, and courts may reject the premise that correcting for bias constitutes deception.
This is Orwellian doublespeak elevated to federal policy. Bias correction becomes deception. Consumer protection becomes censorship. State sovereignty becomes federal obstruction.
The Resistance
In anticipation of the executive order, the National Association of Attorneys General sent a letter on November 25, 2025, on behalf of a bipartisan coalition of 36 state attorneys general, urging congressional leaders to reject proposals for a federal moratorium that would prohibit states from enacting or enforcing laws addressing artificial intelligence. The coalition emphasized that while AI is a transformative technology with the potential to benefit sectors such as healthcare, public safety, and education, it also poses significant risks, especially to vulnerable populations, including children and seniors. Recent incidents have demonstrated how AI can be used to perpetrate scams, distort reality, and engage in inappropriate or harmful interactions with users.
To this point, the concept of widespread preemption of state AI law has faced strong bipartisan pushback, and there has been no public indication that state governors and lawmakers will cede ground.
The Limbo Economy
While this constitutional battle unfolds, businesses will spend the remainder of 2026 in regulatory purgatory. They must continue to comply with state laws like California's SB 942 and Colorado's SB 24-205, which remain valid statutes, while preparing for a potential bifurcation of compliance standards should federal injunctions temporarily halt enforcement in specific jurisdictions.
However, unless and until courts invalidate state laws on preemption grounds, regulated parties must continue to comply with state AI regulations. The result is a compliance nightmare where companies must prepare for multiple scenarios while the federal government systematically undermines the authority of democratically elected state governments.
The Commerce Department evaluation, due five days ago, represents more than regulatory review—it's a declaration of war against federalism itself. In the digital age, when algorithmic decisions affect millions instantly, the question isn't whether we need AI governance.
It's whether we'll have democracy left to govern at all.