Sunstein Insights


What’s in a Voice: How the Law is Being Used to Combat Deepfakes

Jackie Salwa


Jackie is a member of our Litigation, Trademark, and Business Practice Groups.

Photo credit: Stutzman

While it may be amusing to hear a deepfake of President Biden singing “Baby Shark,” AI voice technology can just as easily be used for nefarious purposes, such as scams, extortion, and spreading misinformation. The ultimate harm caused by AI may be a loss of autonomy and dignity. Before recording technology, it was impossible to hear your own voice outside of your body. Since 1877, it has been possible to hear your recorded voice played back aloud. Now, in 2024, it is possible to hear your own voice without ever having spoken aloud to begin with.

To preserve our voices, an integral part of our identity, governing bodies have been scrambling to catch up with the rapidly evolving field of artificial intelligence. Various approaches are being taken to safeguard our voices from being stolen and used without our knowledge. From statutes to case law to incentivizing industry growth via prize money, only time will tell which approaches will be effective at protecting this key attribute of our personhood.

Statutory Schemes

Over the years, lawmakers have grown increasingly ambitious in the scope of voice-based intellectual property laws.

In 2020, New York introduced regulations to enhance the right of publicity and penalize the unlawful dissemination of sexually explicit deepfakes. Deepfake pornography is one of the most common applications of AI software: it has been estimated that 96% of deepfakes are sexually explicit and feature women who did not consent to the creation of the content. This legislation was aimed at the unsettling applications of AI technology that have been used in a variety of malicious ways, such as threatening journalists and government officials. Notably, the New York legislation includes a post-mortem right of publicity, allowing individuals and their heirs to control the commercial use of their name, voice, likeness, or digital replica for up to 40 years after death.

This year, the California State Assembly unanimously passed a bill requiring disclosure when a bot, rather than a natural voice, is used in telephone calls. This measure aims to curb the use of AI in spam calls, a common and potentially dangerous application of AI voice technology. AI has been used to lift an individual's voice from a voicemail greeting and replicate it in order to "call" the speaker's family members and trick them into handing over money.

With its rich musical heritage, Tennessee has taken the lead in protecting against the unauthorized use of voices. In March 2024, the Tennessee legislature passed the ELVIS Act (Ensuring Likeness Voice and Image Security Act), which makes it a Class A misdemeanor to use AI to mimic a person's voice without their permission. The law also allows for private enforcement of violations, targeting those who distribute technology primarily designed to reproduce a person's voice or likeness without authorization. This initiative represents the most ambitious effort by a state government to regulate deepfakes.

With growing national concern about safeguarding against AI replication of voices, it seems likely that federal legislation is soon to come. In January 2024, Congresswoman Maria Salazar introduced the No Artificial Intelligence Fake Replicas and Unauthorized Duplications Act of 2024 (No AI FRAUD Act), which seeks to provide federal property rights for individual likenesses and voices. This act aims to punish those who distribute AI voice-generating programs or unauthorized AI-generated voices, regardless of the distribution's purpose.

Case Law


Where legislatures have stalled in making laws, courts have stepped in to define individual voice rights. American courts have not yet had occasion to hear many landmark cases on this topic, but interesting developments have occurred abroad.

In April 2024, for example, the Beijing Internet Court ruled on its first case regarding the infringement of personal rights through the use of AI voices. The plaintiff, a voice actress, discovered that her recordings were being licensed to an AI software company without her consent. The case examined whether China's Civil Code should protect a person's voice as it protects their image, even when the voice is AI-generated. The court concluded that the characteristics of the AI-generated voice were clearly attributable to the plaintiff, granting her protection under the Civil Code.

Recently, Scarlett Johansson's attorneys publicly demanded that OpenAI disclose how it developed an AI personal assistant voice that the actress says sounds uncannily similar to her own, after she turned down an offer to voice its new feature. OpenAI denied the connection but dropped the voice soon after her complaints went public. The Beijing Internet Court decision may lend some insight into how cases such as Scarlett Johansson's against OpenAI might fare once American judges have the chance to hear them.

Prize Money


One novel method of counterbalancing the growth of companies developing AI is to incentivize the growth of companies policing AI. In November 2022, the FTC announced a challenge to incentivize ideas that protect consumers from the misuse of AI-enabled voice cloning. The FTC offered a prize for interventions that best performed one of three functions: prevention/authentication, detection/monitoring, or post-use evaluation.

Ultimately, four winning interventions were chosen, and the teams split $35,000 in prize money. The winning solutions included an AI algorithm distinguishing genuine from synthetic voice patterns, software watermarking of voice recordings, authentication software, and voice cloning detection technology. This challenge was the sixth launched by the FTC under the America COMPETES Act to spur innovative consumer protection solutions. By creating a counterbalance to the malicious applications of AI, this approach may help to regulate the growing AI industry, preserving its many potential applications while reducing its misuse.


The rapid advancements in AI technology have necessitated measures to protect individuals' voices and likenesses from unauthorized use. State-level initiatives like Tennessee's ELVIS Act represent significant steps toward safeguarding personal identities in the digital age. The case decided by the Beijing Internet Court further highlights global efforts to address AI misuse, setting a precedent for protecting personal rights and illustrating how American courts may handle their role in protecting an important facet of our identities. Lastly, the FTC's 2022 challenge to combat AI-enabled voice cloning fraud underscores the importance of innovation in consumer protection. These initiatives reflect a commitment to using innovation to protect personal rights in the face of an evolving and potentially dangerous technology.
