The phrase “move fast and break things” carries pretty negative connotations in these days of (Big) techlash. So it’s surprising that state and federal policymakers are doing just that with the latest big issue in tech and the public consciousness: generative AI, or more specifically its uses to generate deepfakes.

Creators of all kinds are expressing a lot of anxiety around the use of generative artificial intelligence, some of it justified. The anxiety, combined with some people’s understandable sense of frustration that their works were used to develop a technology that they fear could displace them, has led to multiple lawsuits.

But while the courts sort it out, legislators are responding to heavy pressure to do something. And it seems their highest priority is to give new or expanded rights to protect celebrity personas–living or dead–and the many people and corporations that profit from them.

The broadest “fix” would be a federal law, and we’ve seen several proposals this year. The two most prominent are NO AI FRAUD (in the House of Representatives) and NO FAKES (in the Senate). The first, introduced in January 2024, purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad swath of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category—from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. It also characterizes the new right as a form of federal intellectual property. That linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs, because Section 230 immunity does not apply to federal IP claims. NO FAKES, introduced in April, is not significantly different.

There’s a host of problems with these bills, and you can read more about them here and here. 

A core problem is that these bills are modeled on the broadest state laws recognizing a right of publicity. A limited version of this right makes sense—you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products—but the right of publicity has expanded well beyond its original boundaries. It now potentially covers just about any speech that “evokes” a person’s identity, such as a phrase associated with a celebrity (like “Here’s Johnny”) or even a cartoonish robot dressed like a celebrity. It’s become a money-making machine that can be used to shut down all kinds of activities and expressive speech. Public figures have brought cases targeting songs, magazine features, and even computer games.

And states are taking swift action to further expand publicity rights. Take this year’s digital replica law in Tennessee, called the ELVIS Act because of course it is. Tennessee already gave celebrities (and their heirs) a property right in their name, photograph, or likeness. The new law extends that right to voices, expands the risk of liability to include anyone who distributes a likeness without permission, and limits some speech-protective exceptions.

Across the country, California couldn’t let Tennessee win the race for the most restrictive/protective rules for famous people (and their heirs). So it passed AB 1836, creating liability for anyone who uses a deceased personality’s name, voice, signature, photograph, or likeness, in any manner, without consent. There are a number of exceptions, which is better than nothing, but those exceptions are pretty confusing for people who don’t have lawyers to help sort them out.

These state laws are a done deal, so we’ll just have to see how they play out. At the federal level, however, we still have a chance to steer policymakers in the right direction. 

We get it: everyone should be able to prevent unfair and deceptive commercial exploitation of their personas. But expanded property rights are not the way to do it. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voices, it should take a precise, careful, and practical approach that avoids potential collateral damage to free expression, competition, and innovation.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2024.