China Intensifies AI Chatbot Regulation Amidst Concerns Over Suicide and Gambling Content

China is proposing new regulations for AI chatbots designed to mimic human interaction, prioritizing emotional well-being and preventing harm, especially concerning self-harm. These rules, if enacted, would be among the first globally to focus on anthropomorphic AI’s emotional safety. Key provisions include prohibiting harmful content, mandating intervention for suicidal users, and protecting minors. This move signifies a shift from content safety to emotional safety in AI regulation.

China is moving to establish guardrails for artificial intelligence, with proposed regulations aimed at preventing AI-powered chatbots from negatively impacting users’ emotional well-being, particularly concerning self-harm. The Cyberspace Administration of China (CAC) released draft rules this past Saturday that specifically target “human-like interactive AI services.”

These forthcoming regulations will apply to AI products and services available to the public in China that are designed to mimic human personalities and engage users on an emotional level through various modalities including text, images, audio, and video. A public comment period for these proposed rules is set to conclude on January 25th.

Industry observers note that these measures represent a significant development, potentially being the first of their kind globally to regulate AI with anthropomorphic characteristics. The proposals emerge as Chinese technology firms have been aggressively developing AI companions and digital personas, reflecting a broader trend in the global AI landscape.

Compared to China’s previous generative AI regulations introduced in 2023, these new proposals signal a notable shift in focus. Experts suggest this marks a transition from primarily addressing content safety to prioritizing emotional safety within AI interactions.

Key provisions outlined in the draft regulations include:

* **Prohibition of Harmful Content:** AI chatbots will be forbidden from generating content that incites suicide or self-harm, or engaging in verbal abuse or emotional manipulation that could jeopardize users’ mental health.
* **Intervention Protocols:** In instances where a user explicitly expresses suicidal intent, technology providers will be required to escalate the conversation to a human operator and promptly notify the user’s guardian or a designated contact.
* **Content Restrictions:** AI chatbots must refrain from generating content related to gambling, obscenity, or violence.
* **Minor Protections:** Children will require guardian consent to utilize AI for emotional companionship, with stipulated limits on usage duration. Platforms will be expected to implement mechanisms to identify minors, even if they do not self-disclose their age, and default to minor-specific settings in cases of uncertainty, while allowing for appeals.

Further stipulations include requirements for tech providers to issue reminders to users after two hours of continuous AI interaction. Additionally, AI chatbots serving over one million registered users or more than 100,000 monthly active users will be subject to mandatory security assessments. The document also encourages the application of human-like AI in areas such as cultural dissemination and companionship for the elderly.

This regulatory push comes at a pivotal moment for China’s AI sector. Two prominent AI chatbot startups, Z.ai (also known as Zhipu) and Minimax, have recently filed for initial public offerings in Hong Kong. Minimax is particularly recognized internationally for its Talkie AI application, which lets users interact with virtual characters. Revenue from Talkie AI and its domestic counterpart, Xingye, reportedly accounted for over a third of the company’s revenue in the first three quarters of the year, with the two apps averaging more than 20 million monthly active users during that period. Z.ai, operating under the name “Knowledge Atlas Technology,” said its technology had “empowered” approximately 80 million devices, spanning smartphones, personal computers, and smart vehicles, though it did not disclose specific user numbers.

Neither Z.ai nor Minimax has yet commented on how the proposed regulations might affect its IPO plans. The development of AI with emotional resonance presents both immense opportunity and significant challenges, and these proposed rules underscore a growing global effort to ensure AI development aligns with societal well-being.

Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/15096.html
