How States Can Help President Trump Usher in a Golden Age of AI
The overarching theme of President Trump’s second inaugural address was the beginning of a new Golden Age of America. Just one year from the 250th anniversary of our founding, the President invoked America’s heritage as a nation of “explorers, builders, innovators, entrepreneurs, and pioneers,” and he reminded us that “In America, the impossible is what we do best.”
President Trump clearly understands just how vital emerging technology will be in ushering in this new renaissance. American tech leaders from Amazon, Google, Meta, and X were front and center at his historic inauguration ceremony, a stark contrast to the previous administration’s often-adversarial posture toward the industry.
Instead of kneecapping our tech champions with punitive regulations and demands for censorship, the President sees the potential for American tech innovation as both an engine for economic growth at home and a national security imperative to combat foreign adversaries abroad. In his first week in office, President Trump rescinded his predecessor’s sweeping 2023 AI executive order, and instead announced a joint venture with OpenAI, Oracle, and SoftBank Group to bring as much as $500 billion in private sector investment to build artificial intelligence infrastructure in the U.S.
While the new administration is off to a promising start, state and local governments have an important role to play in our new Golden Age: fostering innovation at home, removing regulatory barriers that unreasonably impede AI development, and protecting American consumers from demonstrable, evidence-based harms.
As legislative leaders across the nation contemplate the best tech and AI regulatory framework for their states, here are three actionable steps legislators can take to support President Trump’s vision of a new Golden Age powered by American technology:
Step 1: Build State Expertise on AI, Separate Fact from Fiction
States have seen an unprecedented explosion in new legislation filed in recent years targeting emerging technologies such as artificial intelligence, automation, cryptocurrencies, and much more.
The pace of new state AI proposals has accelerated dramatically. In 2023, ALEC found that at least 130 state AI bills were filed across 23 states, while nearly 700 individual AI bills were filed across 45 states in 2024. Not even one full month into the 2025 state legislative sessions, some analysts estimate that more than 300 AI bills have already been introduced, on pace to exceed 2024’s high-water mark.
Therefore, it is important for all policymakers to separate the myths from the facts when it comes to legislating on AI. States should begin with the Model State Artificial Intelligence Act, one of ALEC’s Essential Policy Solutions for 2025, which helps states conduct an inventory of existing state laws applicable to AI use through an advisory Office of AI Policy, and identify any gaps in state law to address specific harms such as illegal deepfakes.
Step 2: Enforce Existing Laws to Protect Kids and Consumers from AI Threats
Since the debut of novel generative AI tools such as ChatGPT and Google Gemini, the AI regulatory landscape has often been incorrectly characterized as a lawless “Wild West,” necessitating sweeping regulations on the development and use of these new technologies. Policymakers and the public have expressed legitimate concerns about how bad actors might abuse AI to commit fraud, impersonate the voice and likeness of loved ones, or facilitate the creation and distribution of unlawful revenge pornography and child sexual abuse material (CSAM).
The good news is that, in many cases, existing consumer protection and anti-discrimination laws are already sufficient to hold criminals accountable for illegal conduct, regardless of whether an AI tool was used. Last year, the Federal Communications Commission used the existing Telephone Consumer Protection Act of 1991 to take action against a deepfake robocall impersonating former President Biden in the New Hampshire presidential primary election.
Additionally, the Federal Trade Commission launched a new initiative called Operation AI Comply to crack down on companies and products that “use AI tools to trick, mislead, or defraud people” in violation of existing consumer protection laws. Some of the FTC’s cases targeting AI fraud under former Chair Lina Khan were even bipartisan.
In cases where there is a genuine gap in the law that must be addressed, regulation should be narrowly tailored and focused on specific harmful conduct, not the underlying technology itself. Policymakers can look to ALEC’s two model policies on AI deepfake media—the Stop Deepfake CSAM Act and the Stop Non-Consensual Distribution of Intimate Deepfake Media Act—as positive examples of how existing laws can be updated to address tangible, real-world problems without hindering innovation.
Step 3: Remove Regulatory Barriers Impeding AI Research and Economic Opportunity
President Trump rightly repealed the Biden Administration’s sweeping AI executive order, and pledged to replace it with a better framework. While some states like California and Colorado are focused on regulating “algorithmic bias” or throttling the development of advanced AI models in the name of public safety, Utah developed an alternative approach to ease regulatory burdens while encouraging responsible AI innovation in the Silicon Slopes.
Utah’s new law created a first-in-the-nation Artificial Intelligence Learning Laboratory designed to foster collaboration among businesses, academia, stakeholders, and the legislature when considering proposed AI regulations. The AI Learning Lab also accepts applications for regulatory mitigation agreements, similar to regulatory sandboxes, that provide temporary waivers to existing regulations and pave the way for broader regulatory reform if the sandbox experiments are successful.
More states should follow Utah’s lead and make it easier for startups and small businesses to utilize novel AI in the marketplace while ensuring reasonable safeguards are in place to protect consumers from tangible harms.
Additional ALEC Resources
ALEC Model Policies:
Model State Artificial Intelligence Act
Universal Regulatory Sandbox Act
Stop Non-Consensual Distribution of Intimate Deepfake Media Act
ALEC Analysis:
ALEC Leads on ‘Sensible, Constructive’ Artificial Intelligence Policy
Generative AI: Should We Innovate or Regulate? It’s Time for Choosing