Protecting Taylor Swift and All Americans from Illegal Deepfakes

AI deepfakes are pushing state lawmakers to act.

As state legislatures across the nation gavel in for the 2024 session, artificial intelligence (AI) remains at the top of policymakers’ to-do lists. Some analysts estimate that hundreds of state bills targeting AI have already been introduced in 2024 or carried over from the 2023 session, covering topics that range from the use of AI in elections and political campaigns to sweeping regulations of generative AI tools.

One such question before lawmakers, the problem of AI-generated deepfakes, was once again thrust into the public consciousness last week as nonconsensual explicit and abusive images of pop sensation Taylor Swift went viral across social media platforms. Recent reporting indicates that X has taken some steps to address the issue, but the situation prompted calls for immediate action. What should states do to hold these bad actors accountable?

Unfortunately, the Taylor Swift case is far from an isolated incident. Beyond A-list celebrities, abusers have routinely taken advantage of society’s most vulnerable and flagrantly disregarded the terms of service of digital platforms to perpetrate their illegal activity.

Last November, The Wall Street Journal reported on a horrific incident at a New Jersey high school in which several young women, including a 14-year-old student, were bullied by classmates who widely circulated fake, AI-generated nude photos to their peers in group chats. For the many everyday victims, parents, school administrators, and others left to pick up the pieces, there is often no clear pathway in the judicial system to bring abusers to justice, and these cases rarely receive the level of attention that Taylor Swift’s did.

According to a USA Today analysis of the state legal landscape for pornographic deepfakes, only 10 states directly address AI-generated content in their revenge pornography statutes: California, Florida, Georgia, Hawaii, Illinois, Minnesota, New York, South Dakota, Texas, and Virginia. In the remaining jurisdictions, it is not always clear whether existing laws, as written, would cover harmful content generated in part by AI.

Recognizing the need to resolve this ambiguity in existing law, ALEC members adopted two new model policies last December: the Stop Non-Consensual Distribution of Intimate Deepfake Media Act and the Stop Deepfake CSAM Act. This model language closes potential gaps and ambiguities in state civil and criminal codes, ensuring that malicious, nonconsensual deepfakes are punishable under existing revenge pornography laws.

State lawmakers can take substantive action by adopting these common-sense ALEC model policies without delay, closing any possible loopholes and ensuring legal recourse for victims of nonconsensual deepfakes and child sexual abuse material.