ChatGPT Mania: What State Lawmakers Need to Know About Generative AI
ChatGPT gave us a peek into the future. Don’t regulate it out of existence.
Excitement around the popular new artificial intelligence (AI) tool ChatGPT has reached a fever pitch. Now that the initial novelty surrounding the technology has subsided a bit, people are beginning to wrap their heads around what this technological breakthrough means for the not-too-distant future.
Yes, ChatGPT is impressive for what it is, but make no mistake: this nascent AI technology is still developing. Imagine a world where, instead of yelling “Representative” into the phone and waiting in a customer service queue for hours, an intelligent chatbot could handle a customer refund request or reschedule a transatlantic flight in minutes.
OpenAI, the developer of ChatGPT, has successfully delivered a highly desirable product with tangible, real-world applications for users. The groundswell of genuine enthusiasm around ChatGPT has prompted industry competitors, including Google, to invest in their own generative AI and chatbot services.
In this early stage of development, government should avoid the temptation to regulate AI out of existence from a position of fear before the technology can even get off the ground. ALEC believes a free-market, limited government approach will ensure America’s leadership in artificial intelligence, result in more consumer choice, and equip the next generation with the tools and the know-how to succeed in the years ahead.
Not surprisingly, some governments around the world already have tools like ChatGPT in their regulatory crosshairs. On the heels of the European Union’s effort to target American technology firms, EU officials have proposed a new Artificial Intelligence Act, and China has already implemented new restrictions to combat deepfakes and information Beijing considers “disruptive to the economy or national security.”
In DC, policymakers are considering strict legislative proposals that could overregulate autonomous systems and algorithms to the detriment of consumers and good-faith actors in the AI space. At least one Member of Congress is calling for the federal government to “get a jump on regulating” generative AI tools like ChatGPT.
States should instead enact policies that support domestic AI innovation and allow industry to chart the path forward. Arizona, Utah, and several other states are leading this effort by adopting new regulatory sandbox laws that encourage innovators to experiment with new technologies in controlled environments. Under the universal regulatory sandbox framework, state attorneys general ensure that consumers are protected and that participants abide by transparency requirements; if a test is successful, the framework helps pave the way for expansion on the open market.
This kind of out-of-the-box approach to regulation—pardon the pun—is exactly what’s needed at this pivotal juncture to cement the United States as the global leader in technology. For more information on ALEC’s Universal Regulatory Sandbox model policy and more pro-innovation ideas, please check out our full model policy library here.