Key Takeaways from Artificial Intelligence Week and President Biden’s AI Executive Order
Last week was massive for artificial intelligence (AI) policy news in the United States and abroad. As we approach the one-year anniversary of ChatGPT’s public launch later this month, the AI policy debate is only going to accelerate heading into next year. Here are three key takeaways state legislators and the American public should know to stay up to date:
- Federal and global leaders urgently call for more government intervention to ensure “Trust and Safety” in AI.
Kicking off AI Week, President Biden released a massive 100+ page executive order on October 30, 2023, launching a whole-of-government effort to ensure the “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.” By the President’s own admission, this unilateral order—taken without a vote from the people’s elected representatives in Congress—represents “the most sweeping actions ever taken to protect Americans from the potential risks of AI.” The executive order continues a worrisome trend on the Left to discard our nation’s decades-long tradition of light-touch, market-oriented regulation in favor of strict government supervision of emerging technologies and the internet.
Vice President Kamala Harris carried this message to the United Kingdom later in the week while representing the United States at Prime Minister Rishi Sunak’s inaugural AI Safety Summit. Government officials, leading AI developers, and civil society groups gathered to “understand the risks” due to a “loss of control” of AI systems. Addressing an audience at the U.S. Embassy in London, Vice President Harris called for an expanded role for the U.S. government in the development and deployment of AI systems to mitigate the “existential threats of AI that could endanger the very existence of humanity.”
Meanwhile, back in Washington, Senate Majority Leader Chuck Schumer recently hosted the third installment of his AI Insight Forums, convening a select group of representatives from industry, academia, labor unions, and think tanks to study the workforce impacts of artificial intelligence.
Coming out of the AI Safety Summit, a subset of leaders remains utterly captivated by these ethereal narratives of rogue AI systems leading to the demise of the human race. Alarmingly, these apocalyptic warnings are often used to justify proposals to intentionally halt or limit AI innovation—ceding an advantage to global rivals like China and the United Arab Emirates—in the name of safety.
However, some in the AI development community, including Stanford University professor Andrew Ng, are pushing back against the calls for strict regulation in the name of preventing an AI apocalypse. Posting on X, Ng explained: “Overhyped fears about AI leading to human extinction are causing real harm […] Hype about harm is also being used to promote bad regulation worldwide, such as requiring licensing of large models, which will crush open-source and stifle innovation.”
- Biden Administration establishes mandatory reporting requirements on frontier AI models and directs the Commerce Department to develop guidelines on authenticating and watermarking AI content.
As a cornerstone of the Administration’s new effort to regulate AI systems, President Biden invoked the Defense Production Act to require AI companies to comply with new mandatory “red-team” safety test requirements. If agencies determine that a foundation model poses a serious risk to national security, economic security, or national public health and safety, companies must notify the U.S. government when training the model, submit safety test results, and provide “other critical information” to the government. The order’s overly broad definition of “artificial intelligence,” combined with the open question of what exactly the Administration considers a security risk, threatens to ensnare companies developing common and benign software and algorithms, such as the AI tools powering search engines and spell-check.
While the Administration’s intent to promote transparency may be reasonable and merits a fair debate, the executive order is a departure from the voluntary commitments secured from various private sector AI companies earlier this year. As Kristian Stout of the International Center for Law and Economics noted, violations of the Defense Production Act are federal felonies punishable by fines of up to $10,000 and even imprisonment. This new web of regulatory red tape, coupled with the potential for severe federal penalties, may have an adverse impact on startups hoping to break into this space and incumbent software developers alike.
Finally, President Biden has instructed the Department of Commerce to develop new guidance for content authentication and watermarking to label content generated by AI. Regulators should carefully consider the scope of which AI systems are covered when implementing such guidance. Depending on how they are defined, “artificial intelligence” and “algorithms” in some form are already integrated into much of the consumer and enterprise software products most Americans use on a daily basis—from Zoom video calls, to word processing on Google Docs or Microsoft Word, to spam filters protecting our mobile devices and email inboxes.
Slapping a “This content was generated in part by AI” disclosure label or watermark on any content that could have plausibly been created using AI would inundate the public with hundreds of warnings each time they access a smartphone or computer, putting California’s Prop 65 warnings to shame. Policymakers should instead follow the lead of private sector efforts like the Content Authenticity Initiative, which is already making strides toward verifying authentic content online.
- Positive Developments: Supporting the use of AI in education and workforce development, guidance for effective government agency use of AI, and attracting and retaining skilled AI workers.
Some bright spots in the executive order include efforts to support educators in applying AI tools in the classroom, new guidance advancing the federal government’s use of AI to better serve constituents, and plans to attract and retain the necessary skilled AI professionals needed to remain competitive in this growing field. Importantly, President Biden has instructed federal agencies to avoid “imposing broad general bans or blocks on agency use of generative AI” and to “provide their personnel and programs with access to secure and reliable generative AI capabilities, for the purposes of experimentation and routine tasks.”
As ALEC noted in a recent regulatory filing to the Biden Administration’s NTIA, regulators should focus on how to make it easier for American innovators to build novel AI solutions and turn to the plethora of existing consumer protection statutes, anti-discrimination statutes, and case law to resolve concerns. Contrary to the popular narrative, artificial intelligence is not an unregulated “Wild Wild West.” Even Vice President Harris conceded in her recent speech that “there are many existing laws and regulations that reflect our nation’s longstanding commitment to the principles of privacy, transparency, accountability, and consumer protection. These laws and regulations are enforceable and currently apply to AI.”
Unfortunately, all signs point to more domestic and international regulatory agencies restricting the free market and inhibiting AI’s development in the name of safety and security. While dystopian science-fiction stories may make for good entertainment, they are a poor substitute for thoughtful debate and analysis of the issues as they appear before us today.
Lawmakers should ground any regulatory proposals for such a consequential technology in the facts, and not rely on unfounded hypotheticals—no matter how tantalizing—to determine U.S. policy.