Communications and Technology

California Gov. Newsom Vetoes Controversial Artificial Intelligence Bill

Gov. Newsom vetoed CA SB 1047 but enacted at least a dozen other bills regulating artificial intelligence and related technologies.

California Governor Newsom has vetoed SB 1047, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, which would have imposed the strictest regulations in the country on the underlying large language models (LLMs) that fuel today’s emerging generative AI technologies, such as OpenAI’s ChatGPT and Google Gemini.

Governor Newsom explained in his veto message that, “The bill applies stringent standards to even the most basic functions […] I do not believe this is the best approach to protecting the public from real threats posed by the technology.”

If SB 1047 had gone into effect in California, government bureaucracy would have severely throttled private-sector innovation in LLMs nationwide by arbitrarily limiting the amount of computing power lawfully permitted to train generative AI models. This government restriction on AI was proposed in the name of public safety, as an attempt to prevent bad actors from unleashing “critical harms” on society, including using AI to develop a bioweapon, to launch cyberattacks on critical infrastructure, or to cause “other grave harms to public safety and security.”

Despite good-faith efforts by industry and the AI research community to prioritize trust, safety, and risk mitigation in their products, SB 1047 would also have exposed AI companies to civil and criminal liability for the illegal behavior of third-party users. Developers unable to prove that their models do not possess a “hazardous capability” before testing would have been targeted by the regulatory state, amounting to a functional ban on open-source AI models, such as Meta’s Llama 3, that intentionally share resources like developer toolkits with the broader open-source community.

Although SB 1047 did not become law, Governor Newsom did sign at least a dozen other laws regulating various aspects of AI in California. Some of these laws, including AB 2839’s restrictions on the use of AI in election material and political communications, have already prompted legal challenges on constitutional grounds.

Instead of repeating California’s missteps on AI regulation, states should follow ALEC’s principled approach, using existing consumer protection and criminal statutes already on the books to address demonstrable harms and hold criminals accountable for illegal deepfake content. This approach allows policymakers to strike a proper regulatory balance that protects Americans while responsibly accelerating the development of promising generative AI tools on our shores.