AI Privacy

  • Deepfake Detector Alarms Creators and Experts

    YouTube’s new “likeness detection” tool lets creators submit ID and a short biometric video to flag AI‑generated videos that misuse their faces. While Google says the biometric data is used only for verification and to power the feature’s algorithm, its privacy policy leaves open the possibility of using such data to train Google’s AI models. Critics warn this could jeopardize creators’ control over their likenesses, especially as deepfake technology advances. YouTube is reviewing the sign‑up wording but won’t change its underlying data‑use policy, and experts recommend creators avoid enrolling for now.

    January 18, 2026
  • CAMIA Attack Exposes AI Model Memorization

    Researchers have developed CAMIA, a context-aware membership inference attack that exposes privacy vulnerabilities in AI models by detecting whether specific training data has been memorized. CAMIA outperforms existing methods by monitoring how the model’s uncertainty evolves throughout text generation, identifying subtle token-level indicators of memorization rather than relying on a single aggregate loss. Evaluations on Pythia and GPT-Neo models showed significant accuracy improvements with CAMIA compared to previous attacks. This research highlights the privacy risks of training AI models on large datasets and underscores the need for privacy-enhancing technologies. CAMIA’s efficiency makes it a practical tool for auditing AI models.

    September 26, 2025
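    The core idea behind context-aware attacks can be sketched with a small example. A classic loss-based membership inference attack scores a candidate text by its average per-token loss alone; a CAMIA-style signal additionally looks at the loss *trajectory*, since for memorized (member) text the model’s per-token uncertainty tends to collapse once enough context has accumulated. The sketch below is illustrative only, not the authors’ code: the least-squares slope statistic and the hypothetical per-token loss values are assumptions chosen to make the idea concrete.

    ```python
    def nll_slope(token_nlls):
        """Least-squares slope of per-token negative log-likelihoods over
        token positions. A steeply negative slope means uncertainty falls
        fast as context builds -- one possible hint of memorization."""
        n = len(token_nlls)
        xs = range(n)
        mean_x = sum(xs) / n
        mean_y = sum(token_nlls) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, token_nlls))
        var = sum((x - mean_x) ** 2 for x in xs)
        return cov / var

    def membership_score(token_nlls):
        """Combine average loss with the trajectory slope; a higher score
        suggests the text may have appeared in the training set.
        (Illustrative combination, not the paper's exact statistic.)"""
        mean_nll = sum(token_nlls) / len(token_nlls)
        return -mean_nll - nll_slope(token_nlls)

    # Hypothetical per-token NLLs produced by a language model:
    member = [3.0, 2.2, 1.1, 0.5, 0.2, 0.1]      # loss collapses as context grows
    non_member = [3.0, 2.9, 2.8, 2.9, 2.7, 2.8]  # stays uniformly uncertain

    assert membership_score(member) > membership_score(non_member)
    ```

    In practice the per-token negative log-likelihoods would come from the audited model itself; the trajectory-based score separates the two synthetic examples above even though a plain average-loss attack would also flag the first, showing where the extra context-aware signal helps most: sequences whose average loss is ambiguous but whose uncertainty curve is not.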