Resemble AI has secured a $13 million strategic investment round dedicated to AI deep‑fake detection, bringing its total capital raised to $25 million. Investors in the round include Berkeley CalFund, Berkeley Frontier Fund, Comcast Ventures, Craft Ventures, Gentree, Google’s AI Futures Fund, IAG Capital Partners and several other venture partners.
The infusion arrives at a moment when organizations face mounting pressure to authenticate digital content. Advances in generative AI have lowered the barrier for creating convincing deep‑fakes, contributing to an estimated $1.56 billion in fraud losses in 2025. Industry analysts project that generative AI could drive up to $40 billion in fraud-related losses in the United States alone by 2027.
Recent high‑profile incidents illustrate how rapidly the threat landscape is evolving. In Singapore, a coordinated scam leveraging caller‑ID spoofing, voice deep‑fakes and aggressive social engineering stole more than SGD 360,000 from 13 victims. The attackers impersonated a major telecommunications provider and the Monetary Authority of Singapore, exploiting public trust in government and telecom brands.
Deep‑fake detection tools and emerging AI capabilities
Resemble AI builds real‑time verification solutions that enable enterprises to detect AI‑generated audio, video, images and text. The new funding will accelerate the global rollout of its deep‑fake detection platform, which recently introduced two flagship offerings:
- DETECT‑3B Omni, a detection model optimized for enterprise environments that the company says achieves 98% accuracy across more than 38 languages.
- Resemble Intelligence, an explainability platform for multimodal AI‑generated content that integrates Google’s Gemini 3 models to surface the provenance of suspicious media.
Resemble AI positions these solutions as part of a broader push toward real‑time verification for both human users and AI agents interacting with digital media. According to the company, DETECT‑3B Omni is already deployed in sectors such as entertainment, telecommunications and government. Independent benchmark scores on Hugging Face rank the model among the top performers for image and speech deep‑fake detection, with a lower average error rate than competing solutions.
Industry observers note that the rapid maturation of generative AI is reshaping enterprise strategies around content trust and identity assurance. Representatives from Google’s AI Futures Fund, Sony Ventures and Okta have highlighted a shift toward multi‑layer verification frameworks designed to preserve confidence in authentication workflows.
Alongside the investment announcement, Resemble AI published a forward‑looking assessment of deep‑fake‑related risk trends for 2026. The outlook outlines four key dynamics that could influence enterprise planning:
- Deep‑fake verification may become standard for official communications. Following several high‑profile incidents involving government officials, the company expects real‑time detection to become a prerequisite for official video conferences, spurring new procurement cycles and heightened public‑sector adoption.
- Organizational readiness could differentiate competitive positioning. As jurisdictions introduce AI‑specific regulations, firms that embed training, governance and compliance processes early are likely to meet operational and regulatory demands more effectively.
- Identity will emerge as a central pillar of AI security. With impersonation attacks increasingly leveraging synthetic media, enterprises are expected to adopt identity‑centric security models, such as zero‑trust architectures that verify both human and machine identities.
- Cyber‑insurance premiums may rise. The growing frequency of corporate deep‑fake incidents is prompting insurers to reassess policy terms; companies lacking robust detection tools could face higher premiums or stricter coverage limits.
The investment underscores a broader industry realization: generative AI is reshaping risk exposure across every sector. Corporations are now evaluating how verification technology, identity safeguards and incident‑response readiness can be woven into comprehensive security and compliance strategies.

Original article by Samuel Thompson. If you wish to reprint this article, please indicate the source: https://aicnbc.com/14206.html