Deepfake Detector Alarms Creators and Experts

YouTube's new "likeness detection" tool lets creators submit ID and a short biometric video so the platform can flag AI‑generated videos that misuse their faces. While Google says the biometric data is used only for verification and to power the feature's algorithm, its privacy policy leaves open the possibility of using such data to train Google's AI models. Critics warn this could jeopardize creators' control over their likenesses, especially as deepfake technology advances. YouTube is reviewing the sign‑up wording but won't change its underlying data‑use policy, and experts recommend creators avoid enrolling for now.



A new YouTube tool that uses creators' biometric data to flag and remove AI‑generated videos of their likeness may also give Google access to that sensitive information for training its own AI models, experts told CNBC.

In response to concerns from intellectual‑property specialists, YouTube clarified that Google has never used creators’ biometric data to train its models. The company said it is reviewing the wording of the tool’s sign‑up form to avoid confusion, but it will not amend the underlying policy governing data use.

This friction underscores a growing tension inside Alphabet. While Google accelerates its AI initiatives, YouTube is focused on preserving trust with creators and rights‑holders whose revenue depends on the platform.

Launched in October, YouTube's "likeness detection" feature flags videos that use a creator's face without permission. The capability is being rolled out to millions of participants in the YouTube Partner Program as AI‑manipulated content proliferates across social media.

The system scans newly uploaded videos for facial alterations generated by AI. To activate the protection, creators must submit a government‑issued ID and a short biometric video. YouTube says this biometric data is used solely for identity verification and to power the likeness‑detection algorithm.
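YouTube has not disclosed how the detection works internally, but likeness systems of this kind typically compare face embeddings extracted from newly uploaded footage against a reference embedding built from the creator's enrollment video. The sketch below is a minimal, hypothetical illustration of that general technique; the embedding model, the 512‑dimensional vectors, and the 0.85 threshold are all assumptions for illustration, not YouTube's actual implementation.

# Illustrative sketch only: YouTube has not published its detection method.
# This shows the generic face-embedding comparison such systems commonly use:
# a face found in an uploaded video is flagged when its embedding is close
# enough to the creator's enrolled reference embedding.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_likeness_match(reference: np.ndarray, candidate: np.ndarray,
                      threshold: float = 0.85) -> bool:
    """Flag a face whose embedding is close to the enrolled reference.

    The 0.85 threshold is a hypothetical value; a real system would tune
    it to balance missed detections against false flags.
    """
    return cosine_similarity(reference, candidate) >= threshold

# In practice both embeddings would come from a face-recognition model run
# on (a) the creator's enrollment video and (b) faces detected in each new
# upload. Random vectors stand in for model outputs here.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=512)                            # creator's reference
upload_face = enrolled + rng.normal(scale=0.1, size=512)   # a similar face
print(is_likeness_match(enrolled, upload_face))            # True: would flag

Any real deployment would also need face detection and alignment before embedding, and a review queue like the one creators describe later in this article, since a similarity score alone cannot tell an authorized use from a deepfake.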

Critics note that by tying the tool to Google’s broader privacy policy, YouTube leaves open the possibility that creators’ biometrics could be repurposed for future AI training. Google’s privacy policy states that public content, including biometric information, may be used “to help train Google’s AI models and build products and features.”

“Likeness detection is optional, but it requires a visual reference to function,” said YouTube spokesperson Jack Malon. “Our approach to that data is not changing. As our Help Center has stated since launch, the data provided for this tool is used only for identity verification and to power this specific safety feature.”

YouTube indicated it is “considering ways to make the in‑product language clearer,” though no specific revisions or timelines were disclosed.

Industry observers remain cautious. Dan Neely, CEO of Vermillio—a company that protects likeness rights—warned that as Google races to dominate AI, “creators need to think carefully about whether they want their face controlled by a platform rather than owned by themselves.” He added that a likeness is likely to become one of the most valuable assets in the AI era, and relinquishing control could be irreversible.

Third‑party firms such as Vermillio and Loti specialize in monitoring and enforcing likeness rights across the internet. Loti’s CEO Luke Arrigoni described YouTube’s current biometric policy as “enormously risky,” noting that the framework could allow malicious actors to combine a person’s name with their facial biometrics to produce highly convincing synthetic media.

Both Neely and Arrigoni advised creators against enrolling in YouTube's likeness‑detection program at this stage.

Amjad Hanif, Head of Creator Product at YouTube, said the system was built to operate “at the scale of YouTube,” where hundreds of hours of new footage are uploaded every minute. He expects the feature to be available to the more than 3 million creators in the Partner Program by the end of January.

“We do well when creators do well,” Hanif told CNBC. “We’re stewards and supporters of the creator ecosystem, and we’re investing in tools to help them succeed.”

The rollout arrives amid rapid advances in AI video generation, which are making it easier for bad actors to copy a creator’s face and voice—capabilities that can undermine the credibility of creators whose personal brand is central to their business.

YouTuber Doctor Mike, whose real name is Mikhail Varshavski, makes videos reacting to TV medical dramas, answering health‑related questions and debunking internet myths.


Dr. Mikhail Varshavski, known on YouTube as Doctor Mike, said he uses the likeness‑detection tool to review dozens of AI‑manipulated videos each week.

Varshavski has built a channel of over 14 million subscribers by leveraging his credentials as a board‑certified physician to provide reliable health information. He explained that the rapid evolution of AI has made it simple for malicious creators to produce deepfakes that mimic his face and voice, potentially spreading dangerous medical advice.

He first encountered a deepfake on TikTok in which an AI‑generated avatar promoted a “miracle” supplement that he had never heard of. “It freaked me out,” he said. “I’ve spent a decade building trust with my audience. Seeing my likeness used to sell a product that could harm people is terrifying.”

Recent AI video generators such as Google’s Veo 3 and OpenAI’s Sora have lowered the barrier to creating high‑quality deepfakes. These tools are trained on massive datasets that include publicly available YouTube videos—potentially hundreds of hours of Varshavski’s own content.

Varshavski noted that deepfakes have become “more widespread and proliferative,” with entire channels dedicated to weaponizing synthetic media for scams or harassment.

Creators currently cannot monetize unauthorized uses of their likeness the way YouTube's Content ID system lets rights holders share revenue from copyrighted works. Hanif said the company is exploring a model that could extend similar compensation to creators whose facial data is used by AI.

Earlier this year, YouTube gave creators the option to allow third‑party AI companies to train on their videos. Millions have opted in, yet no compensation has been promised.

Hanif confirmed that the team is still refining the detection algorithm. Early testing has shown promising results, though specific accuracy metrics were not disclosed.

When it comes to takedown requests, the volume remains low. Many creators choose not to remove flagged videos, often preferring to monitor the content rather than delete it. “By far the most common action is to say, ‘I’ve looked at it, and I’m okay with it,’” Hanif explained. Industry advocates suggest this low removal rate may stem from confusion and lack of awareness rather than genuine acceptance of AI‑generated content.

