Taylor Swift Files Trademarks to Protect Her Voice from AI Impersonations

Taylor Swift has filed a series of trademark applications aimed at shielding her voice from AI-enabled impersonation. While Swift already holds numerous trademarks, these recent filings—spotted by intellectual property attorney Josh Gerben—introduce a novel approach: protecting the timbre and character of her voice through what is known as a “sound mark.”

The applications, filed by Swift’s company on April 24, include two recordings. In one, she says, “Hey, it’s Taylor,” and in the other, “Hey, it’s Taylor Swift.” Though the recordings themselves are not groundbreaking, their purpose lies in establishing legal protection for Swift’s vocal identity.

The concept of protecting sound as a trademark is not new, though it remains relatively rare. Historically, singers relied on copyright law to protect their recorded music. But AI technologies now allow users to generate entirely new content that mimics an artist’s voice without copying an existing recording, creating a gap that trademarks may help fill.
Gerben suggests that if an AI-generated imitation of Swift’s voice were to become the subject of litigation, she could argue that uses resembling her registered vocal trademarks infringe on her intellectual property rights. The strategy mirrors how NBC protects its iconic chimes, reflecting a broader trend among celebrities adapting to the AI age.

Celebrities Lead the Fight Against AI Deepfakes and Voice Clones

Swift’s approach is part of a growing trend among celebrities to combat AI-enabled impersonations and unauthorized uses of their likenesses. While established artists and actors have long battled fakes, the latest AI models have made producing these imitations easier and more scalable than ever.

Women, in particular, are frequent targets of deepfake operations, which often use their faces and bodies in nonconsensual pornographic content. Swift herself has faced such campaigns, including in early 2024, when AI-generated images of her spread widely on platforms like 4chan.

Other High-Profile Cases Highlight the AI Identity Crisis

Swift’s move to protect her voice via sound marks is just one example of celebrities installing guardrails in the AI era. In 2024, OpenAI paused the rollout of a ChatGPT voice that closely resembled Scarlett Johansson’s—prompting public criticism from the actress, who alleged the company had imitated her voice. (OpenAI later stated it used a different actor for the feature.)

Similarly, the family of Martin Luther King Jr. pressured OpenAI to remove likenesses of the civil rights leader from its video generation platform, Sora, before its shutdown. Meanwhile, YouTube announced it would expand its deepfake detection services to Hollywood, allowing celebrities to request the removal of AI-generated videos featuring their likenesses.

Other celebrities, including Matthew McConaughey, have also pursued trademark protections for their voices, signaling a broader shift in how public figures are responding to the challenges posed by AI.

Why Sound Marks Could Be a Game-Changer

Sound marks—trademarks that protect distinctive sounds—have historically been rare but are gaining traction as AI tools make voice cloning more accessible. Unlike copyright, which protects specific recordings, sound marks can safeguard the unique qualities of a voice itself, offering broader protection against AI-generated impersonations.

While the legal landscape remains untested, Swift’s filings represent a proactive step in an era where AI can effortlessly replicate human voices. The move underscores the urgent need for new frameworks to protect personal identity in the digital age.