Communications and Technology

Balancing Free Speech and Online Safety in the AI Era

Is AI-generated speech protected under the First Amendment?

The developers of two leading generative AI services, ChatGPT and Character.AI, are facing new legal challenges that could have significant implications for the future of AI development and use in America.

In the case of Garcia v. Character Technologies, Florida mother Megan Garcia alleges that her teenage son’s “obsessive” interactions with an AI chatbot companion led to his tragic suicide last year. She filed a wrongful death suit claiming that Character Technologies negligently designed and marketed its chatbot, failed to warn of foreseeable risks, and created a “sexualized product” that could manipulate minors. Garcia seeks damages for her son’s death, as well as injunctive relief requiring stronger safety safeguards on AI platforms.

In response, Character.AI offered its condolences to Garcia’s family and has since implemented new safety features to prevent further tragedy. Still, the case raises an important question: Is AI-generated speech protected under the First Amendment?

Character Technologies maintains that its chatbot’s responses are expressive and therefore protected by the First Amendment, drawing comparisons to video games and fictional characters recognized as speech in Brown v. Entertainment Merchants Association. The defense also asserted users’ rights to receive information and ideas from the chatbot.

Garcia’s attorneys countered that AI cannot claim free speech protections because it lacks intent or consciousness, key elements of expression established by Texas v. Johnson. They cited Miles v. City Council of Augusta, where a talking cat named “Blackie” was held not to be a rights-bearing speaker under the Constitution, arguing that AI should be treated as a product, not as a person.

In May 2025, a U.S. District Court judge denied most of the defendants’ motion to dismiss, allowing the bulk of Garcia’s claims to proceed.

OpenAI, the developer of the ubiquitous ChatGPT chatbot and emerging tools such as the Sora 2 video generation model, is facing a similar lawsuit that seeks to hold the AI juggernaut responsible for the tragic death of a 16-year-old in April 2025.

The plaintiffs are framing their suit under a strict product liability lens, alleging that their child’s death was a “predictable result of deliberate design choices.” OpenAI says its chatbots have additional safeguards in place for minors to block harmful content, including instructions for self-harm and suicide. The company has also since introduced parental control options that allow parents and caregivers to shape how ChatGPT responds to their teen and to receive notifications when a teen appears to be in a moment of acute distress.

If AI-generated output is determined not to be protected speech, government agencies would gain far broader authority to regulate it.

U.S. District Court Judge Anne C. Conway herself acknowledged that this is a weighty and unresolved question. Treating AI output as unprotected could open the door to sweeping government oversight or censorship of AI communications.

The case also highlights mounting tension between legal oversight and free-market innovation. Broad liability for AI-generated content risks chilling technological progress. Without First Amendment protections, only the largest companies could absorb the costs of litigation and compliance. That burden could function as de facto regulation, erecting significant barriers to entry and ultimately harming competition and consumers.

This concern was highlighted in ALEC’s publication, A Threat to American Tech Innovation: The European Union’s Digital Markets Act, which discusses the impact of the heavy-handed regulations that govern technology policy within much of Europe. The DMA and laws like it could have a chilling effect on innovation, especially for small and medium-sized firms that cannot afford to invest in research and development while maintaining compliance with ever-changing and expanding regulations.

As a first step, policymakers should look to ALEC’s Resolution in Support of Free Market Solutions and Enforcement of Existing Regulations for Uses of Artificial Intelligence for guidance. This resolution affirms that the role of government is not to pre-emptively restrict innovation, but to enforce existing laws clearly and narrowly, without curbing constitutionally protected speech.

Similarly, ALEC’s Statement of Principles for Teen Use of Social Media underscores the importance of empowering parents through transparency tools and age-appropriate safeguards, rather than imposing rigid mandates that burden expression or hinder innovation.

These principles apply equally to AI. Together, these frameworks offer a balanced path, one that emphasizes personal responsibility and targeted protections while preserving the free flow of ideas that are vital to both the First Amendment and a competitive innovation economy.