The Federal Trade Commission (FTC) will begin enforcing a key provision of the Take It Down Act on May 19, requiring websites and online services to remove nonconsensual deepfake media within 48 hours of a victim's request or face fines and an FTC investigation.
The law, passed by Congress in 2025, immediately allowed law enforcement to prosecute individuals who create and post such content. Platforms hosting the material, however, were given a year to develop reporting and takedown systems. Under the new enforcement regime, businesses that fail to remove flagged media within the 48-hour window could face penalties.
This week, FTC Chair Andrew Ferguson sent letters to private-sector companies outlining how the commission will monitor compliance. The FTC set a maximum civil penalty of $53,088 per violation for noncompliant companies. Ferguson's letters also direct platforms to:
- Make it easy for users to submit takedown requests
- Provide clear instructions for reporting violations without requiring an account
- Detail their reporting and removal programs on their websites in plain language
- Display “clear and conspicuous” notice to users about removal requests
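Taken together, those requirements amount to a lightweight intake-and-deadline workflow. The sketch below is illustrative only, assuming a hypothetical platform models requests this way; none of the names come from the FTC's letters, and the only statutory element is the 48-hour window.

```python
# Sketch of a takedown-request intake. All names are hypothetical;
# the 48-hour window is the only element drawn from the law itself.
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
import uuid

REMOVAL_WINDOW = timedelta(hours=48)  # statutory removal deadline

@dataclass
class TakedownRequest:
    content_url: str       # the flagged media
    contact_email: str     # no platform account required to report
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))
    request_id: str = field(default_factory=lambda: uuid.uuid4().hex)

    @property
    def remove_by(self) -> datetime:
        """Deadline for removal under the 48-hour rule."""
        return self.received_at + REMOVAL_WINDOW

def submit_request(content_url: str, contact_email: str) -> TakedownRequest:
    """Accept a report from any user and log it for compliance auditing."""
    req = TakedownRequest(content_url, contact_email)
    # ...queue the flagged content for review and removal before req.remove_by
    return req
```

An anonymous web form or email alias could feed `submit_request` directly, which would satisfy the no-account requirement while leaving an auditable record against the deadline.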
“We stand ready to monitor compliance, investigate violations, and enforce the Take It Down Act,” Ferguson said in a statement. “Protecting the vulnerable—especially children—from this harmful abuse is a top priority for this agency and this administration.”
The FTC’s enforcement applies to websites, apps, social media networks, image- and video-sharing services, and gaming platforms. Ferguson’s letters were sent to major tech and social media companies, including Amazon, Alphabet, Apple, Automattic, Bumble, Discord, Match Group, Meta, Microsoft, Pinterest, Reddit, SmugMug, Snapchat, TikTok, and X.
The law covers both nonconsensual intimate imagery made from real photos and AI-generated or modified “digital forgeries.” The FTC also recommends that companies implement hashing technologies to prevent removed content from reappearing, and that they share findings with nonprofits like the National Center for Missing & Exploited Children and StopNCII.org to track violations across the internet.
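Hash matching is the approach services like StopNCII.org already use: a fingerprint of the removed image is stored so that near-duplicates can be caught at upload time. The sketch below is a simplified illustration using the open-source imagehash library's perceptual hash; production systems typically rely on purpose-built algorithms such as PDQ or PhotoDNA and on shared industry hash lists.

```python
# Illustrative only: blocking re-uploads of removed content with a
# perceptual hash. Real deployments use purpose-built algorithms
# (e.g. PDQ, PhotoDNA) and shared hash lists, not this simple check.
import imagehash            # pip install imagehash
from PIL import Image       # pip install Pillow

# Perceptual hashes of previously removed images, recorded at takedown time.
removed_hashes: list[imagehash.ImageHash] = []

def record_removal(image_path: str) -> None:
    """Fingerprint an image as it is taken down."""
    removed_hashes.append(imagehash.phash(Image.open(image_path)))

def is_reupload(image_path: str, max_distance: int = 8) -> bool:
    """Flag an upload whose hash is within max_distance bits of any
    removed image. Unlike cryptographic hashes, perceptual hashes
    survive resizing, recompression, and minor edits."""
    candidate = imagehash.phash(Image.open(image_path))
    # Subtracting two ImageHash values yields their Hamming distance.
    return any(candidate - stored <= max_distance for stored in removed_hashes)
```

A platform could run a check like `is_reupload` in its upload pipeline and, on a match, hold the file for human review instead of publishing it.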
Earlier this year, Grok, the AI service available to X users, was used to flood the platform with nonconsensual, sexualized deepfakes of real people. Elon Musk, X’s owner, initially dismissed the criticism, but the company later faced multiple criminal and civil investigations, lawsuits, and calls from world leaders to ban the service entirely.
“Some elements of the FTC’s approach—like requiring clear and simple reporting options for victims—align with best practices established by civil society groups,” said Becca Branum, director of the Free Expression Project at the Center for Democracy and Technology.