YouTube’s likeness detection has arrived to help stop AI doppelgängers

https://arstechnica.com/google/2025/10/youtube-rolls-out-likeness-detection-to-help-creators-combat-ai-fakes/

Ryan Whitwam – Oct 21, 2025

AI content has proliferated across the Internet over the past few years, but those early confabulations with mutated hands have evolved into synthetic images and videos that can be hard to differentiate from reality. Having helped to create this problem, Google has some responsibility to keep AI video in check on YouTube. To that end, the company has started rolling out its promised likeness detection system for creators.

Google’s powerful and freely available AI models have helped fuel the rise of AI content, some of which is aimed at spreading misinformation and harassing individuals. Creators and influencers fear their brands could be tainted by a flood of AI videos that show them saying and doing things that never happened—even lawmakers are fretting about this. Google has placed a large bet on the value of AI content, so banning AI from YouTube, as many want, simply isn’t happening.

Earlier this year, YouTube promised tools that would flag face-stealing AI content on the platform. The likeness detection tool, which is similar to the site’s copyright detection system, has now expanded beyond the initial small group of testers. YouTube says the first batch of eligible creators have been notified that they can use likeness detection, but interested parties will need to hand Google even more personal information to get protection from AI fakes.

Currently, likeness detection is a beta feature in limited testing, so not all creators will see it as an option in YouTube Studio. When it does appear, it will be tucked into the existing “Content detection” menu. In YouTube’s demo video, the setup flow appears to assume the channel has only a single host whose likeness needs protection. That person must verify their identity, which requires a photo of a government ID and a video of their face. It’s unclear why YouTube needs this data in addition to the videos people have already posted with their oh-so stealable faces, but rules are rules.

No guarantees

After signing up, YouTube will flag videos from other channels that appear to contain the user’s face. Since YouTube’s algorithm can’t know for sure what is and is not an AI video, some of the face-match results may be false positives from channels that have used a short clip under fair use guidelines.

If creators do spot an AI fake, they can add some details and submit a report in a few minutes. If the video includes content copied from the creator’s channel that does not adhere to fair use guidelines, YouTube suggests also submitting a copyright removal request. However, just because a person’s likeness appears in an AI video does not necessarily mean YouTube will remove it.

YouTube has published a rundown of the factors its reviewers will take into account when deciding whether to approve a removal request. For example, parody content labeled as AI or videos with an unrealistic style may not meet the threshold for removal. On the flip side, you can safely assume that a realistic AI video showing someone endorsing a product or engaging in illegal activity will run afoul of the rules and be removed from YouTube.

While this may be an emerging issue for creators right now, AI content on YouTube is likely to kick into overdrive soon. Google recently unveiled its new Veo 3.1 video model, which includes support for both portrait and landscape AI videos. The company has previously promised to integrate Veo with YouTube, making it even easier for people to churn out AI slop that may include depictions of real people.

Google rival OpenAI has seen success (at least in terms of popularity) with its Sora AI video app and the new Sora 2 model powering it. This could push Google to accelerate its AI plans for YouTube, but as we’ve seen with Sora, people love making public figures do weird things. Popular creators may have to begin filing AI likeness complaints as regularly as they do DMCA takedowns.