Age Verification
-
Character.AI to Block Romantic AI Chats for Minors
Character.AI will eliminate open-ended chats for users under 18 to bolster safety amid rising concerns about the impact of AI companions on vulnerable youth. The decision follows intense scrutiny, including over a tragic suicide linked to the platform. The company will implement age verification and restrict chat functionality by November 25. Character.AI is also establishing an AI Safety Lab and faces mounting regulatory pressure. Other AI developers, including OpenAI and Meta, face similar concerns, prompting an industry-wide safety reevaluation.
-
Global Movement Fuels AI Safety Tech Wave for Kids Online
The global push for online child safety is driving both AI-powered solutions and regulatory scrutiny. The UK's Online Safety Act and similar US legislation compel tech firms to protect minors from harmful content, with hefty penalties for non-compliance. Companies like Yoti are developing age-verification technologies, which raise privacy concerns of their own. HMD Global's Fusion X1 smartphone uses AI to block explicit content. The industry faces pressure to balance child protection with user privacy, demanding ethical implementation and responsible technology development.