5 Takeaways from CNBC’s Investigation into Nudify Apps and Sites

In the summer of 2024, women in the Minneapolis area discovered that a male acquaintance had used the AI platform DeepSwap to create non-consensual, sexually explicit deepfakes from their Facebook photos. More than 80 women across the Twin Cities region were targeted, highlighting the growing threat of easily accessible “nudify” apps. Because existing laws proved insufficient, the victims are advocating for Minnesota legislation to curb these services. The deepfakes caused significant psychological distress, and the case underscores the need for legal and ethical frameworks that address AI misuse while balancing innovation and individual protection. DeepSwap’s own origins remain unclear due to inconsistent corporate information.

Jessica Guistolise, Megan Hurley and Molly Kelley discuss their experience with AI-generated deepfakes with CNBC in Minneapolis, Minnesota, on July 11, 2025. The women were targeted by a mutual friend who used the AI platform DeepSwap to create non-consensual, explicit content.

The summer of 2024 brought a disturbing discovery to a group of women in the Minneapolis area: a male acquaintance had been leveraging artificial intelligence to create sexually explicit images and videos using their publicly available Facebook photos. The tool of choice? The AI platform DeepSwap.

This individual had secretly generated deepfakes targeting the group and, in total, more than 80 women across the broader Twin Cities region. The revelation caused significant emotional distress and prompted the group to seek legislative intervention.

The incident underscores the growing threat posed by “nudify” apps and sites, as explored in CNBC’s investigation. These platforms have democratized the creation of non-consensual, explicit deepfakes, making it alarmingly easy to exploit AI for malicious purposes. Experts warn that such services are pervasive: often advertised via social media, readily available on mainstream app stores, and easily discoverable through basic web searches.

“The accessibility of this technology means that anyone can become a target,” warns Haley McNamara, Senior Vice President at the National Center on Sexual Exploitation.

This case highlights the complex legal and ethical challenges presented by AI and how ordinary citizens are becoming key players in the fight against AI-generated sexual abuse.

Here are five key takeaways from this evolving situation:

Legal Avenues Remain Limited

A legal paradox exists: because the victims were adults and the deepfake creator did not disseminate the images, existing laws appear insufficient to prosecute the perpetrator.

“As far as we know, no laws were broken,” explains Molly Kelley, one of the victims and a law student. “And that’s a major problem.”

Kelley and her fellow victims are now advocating for legislation in Minnesota, championed by State Senator Erin Maye Quade, that aims to curb the operation of nudify services within the state. The proposed bill would impose substantial fines on entities that facilitate the creation of deepfakes.

Maye Quade draws a parallel to existing voyeurism laws that prohibit surreptitiously taking explicit photos without consent. “We need to adapt our laws to keep pace with the rapid advancement of AI technology,” she says.

The Psychological Impact is Profound

Jessica Guistolise, another victim, described experiencing ongoing anxiety and panic attacks since the incident occurred. Seemingly innocuous events, such as the click of a camera shutter, can trigger intense distress.

“The sound of a camera transported me back to those dark corners of the internet,” Guistolise recounts. “I saw myself in situations that I never consented to.”

Mary Anne Franks, a law professor at George Washington University and president of the Cyber Civil Rights Initiative, likens the experience to the trauma associated with “revenge porn,” where intimate images are shared online without consent, often by former partners.

“It’s a violation of bodily autonomy and a theft of identity,” Franks explains. “Victims feel like they’ve lost control of their own image.”

Deepfakes: Democratized and Dangerous

Making convincing deepfakes once required specialized AI expertise. Now, nudify services have lowered the barrier to entry, requiring only an internet connection and readily available photos.

Advancements in AI models have fueled the proliferation of these services. User-friendly apps bundle complex algorithms behind simple interfaces, enabling even those without technical skills to create deepfakes with ease.

While some nudify services include disclaimers about obtaining consent, there’s limited oversight or enforcement. Many market themselves deceptively as harmless “face-swapping” tools.

“Some apps masquerade as playful, yet their primary function is pornographic,” notes Alexios Mantzarlis, an AI security expert at Cornell Tech. “This ambiguity adds another layer of complexity to the issue.”

DeepSwap’s Murky Origins

DeepSwap, the platform used to create the deepfakes in the Minneapolis case, has a limited online footprint and uncertain origins.

A press release from July 2025, carrying a Hong Kong dateline, identified Penyne Wu as CEO and co-founder and listed Shawn Banks, identified as marketing manager, as the media contact.

Attempts to verify information about Wu and to contact Banks were unsuccessful. The company’s stated physical location and legal jurisdiction have also shifted over time, adding to the opacity surrounding DeepSwap’s operations.

DeepSwap’s website currently identifies “MINDSPARK AI LIMITED” as its company name, lists an address in Dublin, Ireland, and states that its terms of service are governed by Irish law. However, previous versions of the page referenced Hong Kong instead. This inconsistent information further obscures the company’s true identity and accountability.

Balancing Innovation and Regulation

Senator Maye Quade’s bill proposes a fine of $500,000 per non-consensual, explicit deepfake generated in Minnesota by companies operating nudify services. However, some fear that federal policies aimed at promoting AI innovation could inadvertently undermine state-level efforts to curb its misuse. The challenge lies in fostering technological advancement while simultaneously implementing robust safeguards to protect individual rights and prevent harm.

Kelley expresses concern that efforts to win the AI race will come at the expense of protecting citizens from harm. “I’m concerned that we will continue to be left behind and sacrificed at the altar of trying to have some geopolitical race for powerful AI,” Kelley said.

Original article by Tobias. If you wish to reprint this article, please credit the source: https://aicnbc.com/10058.html
