YouTube is expanding its AI deepfake monitoring feature to include celebrities, enabling them to identify and request the removal of unauthorized AI-generated videos featuring their likeness.
The platform’s likeness detection tool scans YouTube for AI deepfake content and flags it for public figures enrolled in the program. Enrolled individuals can then review the flagged videos and submit takedown requests, which YouTube evaluates against its privacy policy; approval is not guaranteed.
YouTube initially tested the feature with content creators in fall 2023 and expanded the program to politicians and journalists in March 2024. Celebrities are the latest group to gain access.
The move comes amid growing concern over the misuse of AI-generated content, particularly deepfakes, which can spread misinformation and damage reputations. By giving public figures tools to monitor and control their digital likeness, YouTube aims to address these risks while balancing free expression against privacy rights.