Character.AI to Block Romantic AI Chats for Minors

Character.AI will eliminate open-ended chats for users under 18 to bolster safety amid rising concerns about AI companions’ impact on vulnerable youth. The decision follows intense scrutiny, including a wrongful death lawsuit over a teenager’s suicide linked to the platform. The company will implement age verification and restrict chat functionality by November 25th. Character.AI is also establishing an AI Safety Lab and faces increased regulatory pressure. Other AI developers, including OpenAI and Meta, face similar concerns, prompting industry-wide safety reevaluation.



Character.AI announced Wednesday a significant policy shift aimed at bolstering safety for its underage users. The Silicon Valley startup, known for its platform allowing users to create and interact with AI-powered characters, will soon eliminate open-ended chat functionalities, including romantic and therapeutic dialogues, for users under the age of 18.

This decision comes amid mounting scrutiny surrounding the potential risks associated with AI companions, particularly for vulnerable young users. The tragic case of 14-year-old Sewell Setzer III, who died by suicide after developing relationships with Character.AI chatbots, has intensified the debate. Other AI developers, including OpenAI and Meta, have also faced similar concerns, prompting a reevaluation of safety protocols within the burgeoning AI companionship industry.

Character.AI’s new measures include an initial restriction of open-ended chats to two hours per day for users under 18, followed by a complete removal of this feature by November 25th. CEO Karandeep Anand emphasized the company’s commitment to setting a higher standard for the industry. “This is a bold step forward, and we hope this raises the bar for everybody else,” Anand told CNBC.

The company previously implemented preventative measures in October 2024 to limit sexual dialogues between minors and chatbots, coinciding with the filing of a wrongful death lawsuit by Sewell’s family. Subsequent updates in December introduced stricter guidelines on romantic content for teens. The current announcement represents a more comprehensive approach, effectively eliminating unrestricted conversations for its younger user base.

To enforce this policy, Character.AI is deploying an age assurance system incorporating both first-party and third-party verification methods. The company is partnering with Persona, a leading identity verification provider utilized by platforms like Reddit, to strengthen its age verification process. This partnership underscores the increasing importance of robust identification technologies in safeguarding minors within online environments.

In 2024, Character.AI’s founding team and select researchers integrated with Google DeepMind, the tech giant’s AI division, in a move designed to accelerate the development of AI products and services. The agreement grants Google a non-exclusive license to Character.AI’s large language model (LLM) technology, highlighting the competitive landscape within the AI development space and the strategic importance of acquiring cutting-edge LLM capabilities.

Since Karandeep Anand assumed the role of CEO, Character.AI has diversified its offerings beyond traditional chatbot interactions. The platform now includes AI-generated video feeds, storytelling formats, and role-playing scenarios. While open-ended chats will be restricted for teens, they will still have access to these alternative features, according to Anand, a former Meta executive. This strategic pivot demonstrates Character.AI’s effort to broaden its appeal and mitigate risks associated with unrestricted chatbot conversations among younger users.

Currently, approximately 10% of Character.AI’s 20 million monthly active users are under the age of 18. Anand noted that this percentage has declined as the platform has focused more on storytelling and roleplaying content. The company generates revenue primarily through advertising and a $10 monthly subscription, and is projected to achieve a $50 million revenue run rate by year-end.

Adding to its efforts, Character.AI also revealed the establishment of an independent AI Safety Lab dedicated to research on AI entertainment safety. Although the exact funding amount was not disclosed, the company plans to invite other organizations, including academics, researchers, and policymakers, to collaborate within this non-profit initiative. This move signals a commitment to proactive safety measures and the need for collaboration across the AI industry to address complex ethical considerations.

Regulatory pressure

Character.AI is operating in an environment of increasingly visible regulatory scrutiny regarding AI chatbots and their impact on younger users. The policy change comes as regulators and lawmakers are taking a closer look at the technology.

In September, the Federal Trade Commission issued an order to seven companies, including Character.AI’s parent company, Character Technologies, as well as Alphabet, Meta, OpenAI, and Snap, to investigate the potential impact of AI chatbots on children and teenagers. The FTC’s inquiry reflects a broader concern about the potential for harm and manipulation within AI interactions, particularly for vulnerable populations.

Senators Josh Hawley and Richard Blumenthal introduced legislation aimed at banning AI chatbot companions for minors, while California Governor Gavin Newsom recently signed a law requiring chatbots to disclose that they are AI and to remind minors to take a break every three hours. These legislative actions underscore the growing momentum to regulate the AI companion space and implement safeguards to protect younger users.

Meta has also introduced parental control features that allow parents to monitor and manage their teenagers’ interactions with AI characters. These features include the option to disable one-on-one chats and block specific AI characters, reflecting a growing awareness of the need for parental oversight within AI-driven platforms.

The ethical considerations surrounding sexualized conversations with AI chatbots have also become a central topic. OpenAI CEO Sam Altman stated that OpenAI would permit adult users to engage in erotica with ChatGPT later this year. Microsoft AI CEO Mustafa Suleyman has taken the opposite stance, asserting that his company will not develop chatbots for erotica, deeming sexbots “very dangerous.”

The rapidly accelerating development of AI companions has presented unique challenges, particularly concerning ethics and safety for children and teenagers. “I have a six-year-old as well, and I want to make sure that she grows up in a safe environment with AI,” said Anand. The statement highlights the personal dimension of the concerns driving the push for safety in the AI space.

If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.

Correction: An earlier version of this story incorrectly identified a Persona customer.

