VIDEO: ALEC Urges Biden Administration to Support Private Sector Leadership on AI Trust and Accountability
Regulators must tune out hyperbolic warnings of an AI apocalypse and follow the private sector’s strong lead to achieve trustworthy and responsible AI.
On June 12, 2023, the American Legislative Exchange Council (ALEC) filed regulatory comments responding to the National Telecommunications and Information Administration’s “AI Accountability Policy Request for Comment.” NTIA sought input to guide forthcoming federal regulation on building trust and accountability in artificial intelligence through a proposed system of audits, risk assessments, certifications, and other mechanisms.
ALEC highlights three foundational principles that NTIA and the Administration should adopt in any upcoming rules, regulations, or guidance on artificial intelligence policy.
- Open-source and voluntary transparency guidelines should build trustworthy AI.
- AI standards should be industry-led.
- Lawmakers should promote responsible AI experimentation across sectors.
Rather than imposing prescriptive mandates, the Administration should consider voluntary codes of conduct, industry-driven standards, and self-governance principles that can better adapt to AI’s novel challenges.
Here are some key excerpts from ALEC’s comment:
Unfortunately, many conversations on the future of AI are diminished by hyperbolic allusions predicting an AI-fueled apocalypse or some other existential risk to humanity […] Regulators must approach emerging technology with a sober, rational mind and not make decisions from a position of fear. Extreme rhetoric of this sort only detracts from serious conversations on the positive and negative effects of AI that deserve our attention.
If allowed to flourish in an open, free-market environment, generative AI will result in robust competition across the digital marketplace and expand consumer access to the latest and greatest AI tools. […] Halting or arbitrarily constraining American research on advanced AI, if such an undertaking is even possible, would be a colossal mistake.
At this preliminary phase of the generative AI development cycle, government officials should take the time to educate themselves on the nature of emerging AI systems, study their capabilities, separate fact from fiction, and learn from the decades of private sector research and enterprise case studies dedicated to achieving trustworthy and responsible AI. What governments should not do is rush to adopt overly restrictive laws and regulations just for the sake of “catching up” to bad policies being pursued on other continents or merely to “get a jump” on AI regulation.
Voluntary transparency practices in AI models could help solve the question of trust and encourage public adoption of the technology […] This way, consumers can make informed decisions about the products they choose and independent third parties can verify whether certain AI tools and LLMs function as advertised.
Instead of sealing off entire “high-risk” sectors as off-limits for AI, or charging an AI agency with the power to determine who can qualify for an AI license, policymakers should encourage responsible AI experimentation across sectors by considering universal or targeted regulatory sandbox policies that encourage innovation in emerging technologies.
Contrary to popular belief, AI systems are not completely unfettered from government regulation […] Current laws and jurisprudence prohibiting discrimination of protected classes like race and sex, providing for equal protection under the law, and offering legal redress for consumer protection violations do not evaporate when AI tools are at play.