Federal v. State Frameworks for AI Regulation


Key Takeaways

  • On December 11, 2025, President Trump signed Executive Order 14365, “Ensuring a National Policy Framework for Artificial Intelligence.” The EO addressed key elements including: 1) Targeting State Regulations; 2) Federal Funding Pressure; 3) Preemption Strategy; 4) National Framework; 5) Targeted Exemptions; and 6) AI Policy Shift.
  • On March 20th, the White House released its anticipated draft framework as required by the EO.
  • Congress and the states have also been highly active in advancing AI legislation and policy frameworks and are showing no signs of slowing down; in 2025, over 40 states introduced about 250 bills related to government use of AI. 
  • Technology companies are seeking a balance between regulatory certainty and avoiding “heavy-handed” rules that could stifle innovation or favor competitors; however, there is no single technology industry viewpoint.

White House Framework

The White House document states that the federal government must establish a federal AI policy framework to protect Americans’ rights, support innovation and prevent a fragmented patchwork of state regulations that would hinder national competitiveness, while respecting federalism and states’ rights.

Key elements of the White House draft framework include:

  • Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones.
    • This national standard should respect key principles of federalism and not preempt:
      • The traditional police powers retained by the states to enforce laws of general applicability against AI developers and users, including laws to protect children, prevent fraud and protect consumers.
      • State zoning laws, including states’ authority to determine the placement of AI infrastructure.
      • Requirements governing a state’s own use of AI, whether through procurement or the services it provides, such as law enforcement and public education.
  • Preemption must ensure that State laws do not govern areas better suited to the Federal Government or act contrary to the United States’ national strategy to achieve global AI dominance.
    • States should not be permitted to regulate AI development, because it is an inherently interstate phenomenon with key foreign policy and national security implications.
    • States should not unduly burden Americans’ use of AI for activity that would be lawful if performed without AI.
    • States should not be permitted to penalize AI developers for a third party’s unlawful conduct involving their models.

White House adviser David Sacks told Bloomberg Government that Congress could enact AI legislation within months, thus fulfilling the President’s pledge to create a national framework for regulating AI.

Congress – AI Legislation and Policy Advancements

Last September, Senator Ted Cruz [R-TX], Chairman of the Committee on Commerce, Science, and Transportation, released a legislative framework entitled the “Strengthening Artificial intelligence Normalization and Diffusion By Oversight and eXperimentation,” or SANDBOX Act. The bill creates a regulatory “sandbox” that would allow AI developers to apply to modify or waive regulations that could impede their work. The legislation is intended to improve transparency in lawmaking and encourage American ingenuity, leading to safe AI usage.

Two days before the White House AI framework was released, Senator Marsha Blackburn [R-TN] introduced her draft bill, “The Republic Unifying Meritocratic Performance Advancing Machine intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act,” or TRUMP AMERICA AI Act. The bill would preempt state laws and regulations on frontier AI developers related to managing catastrophic risk. It also would largely preempt state laws addressing digital replicas in order to create a workable national standard, with enforcement carried out by the Federal Trade Commission and the states. The bill would enable the U.S. Attorney General, state attorneys general and private actors to file suit to hold AI system developers liable for “harms caused by the AI system for defective design, failure to warn, express warranty, and unreasonably dangerous or defective product claims.”

The same day the national framework was released, Rep. Brett Guthrie (R-KY), Chairman of the House Committee on Energy and Commerce; Speaker Mike Johnson (R-LA); Majority Leader Steve Scalise (R-LA); Rep. Jim Jordan (R-OH), Chairman of the House Judiciary Committee; and Rep. Brian Babin (R-TX), Chairman of the House Committee on Science, Space, and Technology, issued a joint statement pledging to work on a bipartisan basis “to enact a national framework that unleashes the full potential of AI, cements the U.S. as the global leader, and provides important protections for American families.”

The States – AI Legislation and Policy Advancements

California, New York and Texas have enacted comprehensive laws on AI transparency and consumer safety. In 2024, Colorado and Utah became the first states to enact legislation on AI use in certain applications, and both have since amended their laws. Colorado delayed implementation of its AI Act, which addresses algorithmic discrimination, until June. Utah amended its legislation to clarify that the law applies to regulated entities and to expand consumer protections for interactions with AI in mental health contexts.

In 2025, over 40 states – both blue and red – introduced about 250 bills related to government use of AI. The states include Kansas, Montana, Delaware, New Mexico, Illinois, Iowa, Maryland, Michigan, Ohio, Rhode Island, Connecticut and Tennessee.

In January 2026, the National Governors Association (NGA) launched the “Working Group on AI & Future of Work,” composed of governors’ advisors from a bipartisan set of NGA members who meet regularly to develop a Roadmap for Governors on AI & the Future of Work.

The working group convenes monthly to share best practices and common challenges and to receive briefings from leading national experts. The group is charged with producing a report to be delivered to incoming NGA members and incumbents in November 2026. The publication will include four primary elements:

  • Description of AI technologies, trends, and their potential impact upon the future of work.
  • Survey of the current state and federal policy landscape regarding AI and the future of work.
  • Policy opportunities and potential principles on AI and the future of work for Governors to consider.
  • Opportunities for Governors to innovate and lead by developing AI-enabled state government workforce and organizational models.

Following the release of the White House draft framework, the National Governors Association stated that Governors intend to continue using executive orders and signed legislation tailored to their respective states. NGA’s response reflects a broader concern about the centralization of AI regulation and the importance of state and local governments in addressing early risks associated with emerging technologies.

What Technology Companies Want

While many industry leaders publicly call for oversight, their specific lobbying goals often focus on creating a favorable, unified and innovation-friendly framework.

Technology company core regulatory goals include:

  • Federal preemption over state laws
  • Risk-based approach
  • Regulatory certainty for investment
  • Voluntary frameworks & self-regulation
  • Global competitiveness

However, there is no single technology industry viewpoint. Different types of companies want different things:

  • Big Tech (incumbents) often support some regulation, such as licensing requirements for powerful AI tools.
  • Startups generally favor “light touch” regulation to maintain their agility advantage; they fear that high compliance costs could bankrupt smaller firms before they reach profitability.
  • Open source v. closed source: some companies advocate for open-source AI to challenge the concentration of power, while others argue that making highly powerful future models open-source would be irresponsible due to safety risks.

Looking Forward

The legislative kitchen is crowded with cooks. It’s no wonder technology companies are hoping for a single federal standard that would preempt state laws.

The political question is whether Members of Congress will support legislation that overrides or contradicts laws enacted by their state legislatures and signed by their governors. Moreover, this is an election year with a very short legislative calendar.

Lobbying on AI has surged, with more than 640 companies engaging at the federal level in 2024, a 141% increase from the prior year. Parties interested in AI who are not involved in the public policy process can’t complain about the outcome.


