FTC Intensifies Crackdown on AI-Powered Deepfakes and Scams

The Federal Trade Commission (FTC) is preparing to significantly expand its oversight of artificial intelligence (AI) misuse, particularly in combating nonconsensual sexualized deepfakes and voice cloning scams. The move comes as Congress and the Trump administration put new legal tools in place to address AI-driven harassment and exploitation.

Take It Down Act: A New Legal Framework for AI Abuse

Last year, Congress passed the Take It Down Act, a landmark law that criminalizes the creation, distribution, and hosting of nonconsensual intimate images—including those generated by AI. The legislation empowers authorities to prosecute individuals who share or distribute such content, marking a major step in combating digital abuse.

During a recent Senate oversight hearing, FTC Chair Andrew Ferguson praised the Take It Down Act as one of the “greatest legislative achievements” of the current Congress and President Donald Trump’s administration. He emphasized the FTC’s commitment to “robust enforcement,” signaling a proactive stance against AI-enabled exploitation.

First Conviction Under the Take It Down Act

Earlier this month, the Department of Justice secured its first conviction under the new law. James Strahler, a 37-year-old resident of Columbus, Ohio, pleaded guilty to using AI-generated deepfake nudes in a harassment campaign targeting at least six women. Strahler also admitted to creating deepfake pornography using photos of children in his neighborhood, highlighting the law’s role in protecting vulnerable groups.

Take Down Provisions Set to Launch in May

A key component of the Take It Down Act will take effect in May, allowing individuals to file “take down” notices with websites hosting sexual deepfakes. Under this provision, companies will have 48 hours to remove the content or face FTC investigation and enforcement.

At a March 30 conference in Washington, D.C., FTC Commissioner Mark Meador said that while he hopes enforcement proves unnecessary, the FTC is prioritizing implementation of the take down provisions. "We are actively spinning everything up that we need to enforce the take down provision," Meador said. He added that the FTC will wait for formal complaints before taking action against companies that fail to comply with users' removal requests.

Potential Showdown with Tech Giants Over AI Deepfakes

The new law could lead to early confrontations with major tech companies, particularly those enabling the creation and distribution of nonconsensual deepfakes. xAI's Grok tool, for example, has continued to draw criticism for generating such content in the wake of a scandal earlier this year.

When asked how the take down provisions might apply to Grok's reported "mass nudification" of images of real people, Meador clarified that the FTC cannot act until formal complaints are filed starting in May. "This is coming into place, and then if they don't [remove the content], we would get the complaints and then we would go after them at that point," he explained. "So, we kind of have to wait and see how companies respond to complaints and requests being made."

xAI’s press office did not respond to requests for comment on its preparations to comply with the Take It Down Act.

FTC’s Strategic Focus on Child Protection

A strategic plan published this month by the FTC identified protecting children online as a “key concern.” The commission is exploring additional measures to safeguard minors, including new tools and resources under the Take It Down Act.

The plan states: “The commission is dedicated to exploring other ways the FTC can protect children and support families, including through its new authority under the Take It Down Act.”

Privacy lawyer Casey Waughn, a senior associate at Armstrong Teasdale, noted that the FTC’s expanded authority under the new law could lead to further regulatory actions targeting AI misuse.

Source: CyberScoop