Communications and Technology

AI is Used Widely, but Lawmakers Have Set Few Rules: Jake Morabito in Stateline

Jake Morabito, ALEC Communications and Technology Task Force Director, was featured in Stateline, highlighting how states such as Connecticut are setting up safeguards amid the increased use of artificial intelligence.

Artificial intelligence tools — defined broadly as technologies that can perform complex analysis and problem-solving tasks once reserved for humans — now frequently determine what Americans see on social media, which students get into college, and whether job candidates score interviews. But federal legislation has stalled, leaving regulation to local governments and creating a patchwork of state and municipal laws.

“Lawmakers’ interest in regulating technology surged during this legislative session, and is likely to grow further next year, thanks to the widespread adoption of ChatGPT and other consumer-facing AI tools.”

Like Connecticut, Vermont also plans to expand its AI oversight to the private sector in the future. Raiche said the state will likely accomplish that through a consumer data privacy law, which can govern the data sets underlying AI systems and thus serve as a sort of backdoor to wider regulation. California, Connecticut, Colorado, Utah and Virginia have also passed comprehensive data privacy laws, while a handful of jurisdictions have adopted narrower regulations targeting sensitive or high-risk uses of artificial intelligence.

“Governments should collaborate with industry and not come at it with this adversarial approach,” said Morabito, of ALEC. “Allow the market to lead here … a lot of private sector players want to do the right thing and build a trustworthy AI ecosystem.”

ALEC has proposed an alternative, state-based approach to AI regulation. Called a “regulatory sandbox,” the program allows businesses, in collaboration with state attorneys general offices, to try out emerging technologies that might otherwise conflict with state laws.

Such sandboxes encourage innovation, Morabito said, while still protecting consumers and educating policymakers on industry needs before they draft legislation. Arizona and Utah, as well as the city of Detroit, have recently created regulatory sandboxes where companies can conduct AI experiments.

Read the article here.