Musk’s X Under Scrutiny: India and EU Probe Grok Over AI-Generated Child Sexual Abuse Imagery

X, Elon Musk’s platform, faces global investigations over its AI chatbot Grok generating explicit content, including child sexual abuse material. Regulators in Europe, India, and Malaysia are probing the issue, while the UK’s Ofcom seeks information and Brazil is considering suspending Grok. Despite widespread condemnation, Musk has responded to some of the generated images with humor. X says it is committed to removing illegal content, but past moderation lapses and a concurrent surge in user engagement underscore the platform’s ongoing challenges.

Elon Musk’s X Faces Global Scrutiny Over AI-Generated Explicit Content

X, the social media platform owned by Elon Musk’s xAI, is under investigation by regulatory bodies across Europe, India, and Malaysia. The probes are a response to concerns that its AI chatbot, Grok, has been used to generate and disseminate sexually explicit images, including those depicting children and women without consent.

The UK’s media regulator, Ofcom, has also formally requested information from X regarding these issues. In Brazil, a member of parliament has called for the suspension of Grok’s use pending a full investigation by the country’s federal public prosecutor and data protection authority.

These investigations come amid a recent global surge in the creation and sharing of nonconsensual intimate imagery (NCII) generated by Grok. Users have been leveraging the chatbot’s text-to-image capabilities, particularly after recent updates to its Grok Imagine feature, to produce and distribute such content widely on the X platform.

While safety experts and digital ethics advocates have strongly condemned the proliferation of these exploitative images, Musk himself has seemingly responded with defiance, sharing some of the AI-generated images, including one of himself in a bikini, accompanied by humorous emojis.

European Commission spokesperson Thomas Regnier addressed the situation directly, stating that the authority is “very seriously looking into this matter” and is aware of X and Grok offering an “explicit sexual content” mode that has generated images with child-like depictions. “This is not ‘spicy,’” Regnier asserted. “This is illegal. This is appalling. This is disgusting. This is how we see it, and this has no place in Europe.”

In India, the Ministry of Electronics and Information Technology has mandated that X conduct a comprehensive review of Grok, covering technical, procedural, and governance aspects. The company was given a deadline of January 5th to comply. Malaysia’s Communications and Multimedia Commission (MCMC) has also launched an investigation and plans to engage with X representatives. The MCMC emphasized the need for all platforms accessible in Malaysia to implement robust safeguards for their AI-powered features, chatbots, and image manipulation tools, in line with Malaysian laws and online safety standards.

In the United States, the National Center on Sexual Exploitation (NCOSE) has urged the Department of Justice and the Federal Trade Commission to investigate the matter. Dani Pinter, chief legal officer and director of the Law Center for NCOSE, highlighted that while legal precedent for AI-generated child sexual abuse material (CSAM) is still developing, existing federal laws prohibiting the creation and distribution of CSAM can be applied to virtually created content, particularly when it depicts identifiable children or sexually explicit conduct. The Take It Down Act, enacted last year, is seen as a relevant piece of legislation.

Neither the DOJ nor the FTC has commented on the ongoing situation. xAI has not issued a statement beyond an automated response.

X’s official safety account released a statement acknowledging its commitment to action against illegal content, including CSAM, through content removal, account suspensions, and cooperation with law enforcement. Musk echoed this sentiment in a separate post, stating that users generating illegal content with Grok would face consequences akin to uploading illegal material directly.

An xAI employee indicated that Grok Imagine had been updated, though specific details regarding changes to prevent harmful image generation were not provided.

This controversy is not the first time X has faced criticism regarding content moderation. The platform has a history of allowing users who have shared child sexual exploitation material to remain active. Notably, in 2023, an account that posted images linked to child exploitation was briefly suspended and then reinstated after Musk intervened, with the company opting to remove the offending posts but retain the user on the platform.

Tom Quisel, CEO of Musubi AI, a company specializing in AI-driven content moderation, commented that xAI appears to have neglected fundamental trust and safety measures in the rollout of Grok Imagine. He suggested that basic detection and blocking mechanisms for images involving children, nudity, or sexually suggestive prompts should be standard.

Despite the ongoing scrutiny, the controversy has not demonstrably harmed X’s user engagement. Data from Apptopia indicates a significant increase in daily downloads for both Grok and X in recent days.

Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/15350.html
