OpenAI Introduces Age Prediction for ChatGPT Consumer Accounts

OpenAI is rolling out an age prediction model to ChatGPT to enhance user safety, especially for minors. The system analyzes account data and behavior to identify users under 18, triggering stricter safety measures and content limitations. This initiative addresses growing regulatory scrutiny and legal challenges, including an FTC investigation and lawsuits concerning AI’s impact on young users. An identity verification service, Persona, allows users to correct misclassifications. This follows recent safety updates, including parental controls and a mental health advisory council, with an initial EU launch planned soon.

OpenAI is introducing an age prediction model to its ChatGPT consumer offerings, a move designed to bolster user safety, particularly for minors. This new feature aims to identify accounts belonging to users under 18 by analyzing a combination of account-level and behavioral data. These signals include historical usage patterns, account tenure, typical activity times, and self-reported age.
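OpenAI has not published the internals of this model, but the described approach of combining weak account-level signals into a single under-18 flag can be sketched in broad strokes. The following Python snippet is purely illustrative: the field names, weights, and thresholds are assumptions chosen for readability, not details disclosed by OpenAI.

```python
# Illustrative sketch of feature-based age screening.
# All field names, weights, and thresholds are hypothetical;
# OpenAI has not disclosed how its age prediction model works.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    account_age_days: int          # account tenure
    self_reported_age: int | None  # age given at sign-up, if any
    median_active_hour: int        # typical local hour of activity (0-23)
    sessions_per_week: float       # historical usage pattern

def likely_under_18(signals: AccountSignals) -> bool:
    """Combine weak signals into a single conservative flag."""
    if signals.self_reported_age is not None and signals.self_reported_age < 18:
        return True  # self-reported minors are flagged outright
    score = 0.0
    if signals.account_age_days < 90:
        score += 0.3  # newer accounts offer less history to rely on
    if 15 <= signals.median_active_hour <= 22:
        score += 0.3  # after-school activity window (illustrative only)
    if signals.sessions_per_week > 20:
        score += 0.2
    return score >= 0.5

print(likely_under_18(AccountSignals(30, None, 17, 25)))  # True
```

In practice, OpenAI's production system would weigh such signals with a trained model rather than hand-set rules; the sketch only conveys the kind of inputs the company says it considers.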

This initiative comes amid increasing regulatory scrutiny and legal challenges facing the artificial intelligence company. OpenAI, along with other major tech players, is under investigation by the Federal Trade Commission (FTC) regarding the potential impact of AI chatbots on children and adolescents. Furthermore, the company is involved in several wrongful death lawsuits, including one that alleges the company’s AI played a role in a teenage user’s suicide.

Once the age prediction model flags a user as potentially under 18, ChatGPT will automatically implement enhanced safety measures. These measures are intended to limit exposure to sensitive content, such as depictions of self-harm. For users who are incorrectly identified, OpenAI has integrated an identity verification service called Persona, which allows them to regain full access to the platform.
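The flagging-and-appeal flow described above can also be sketched as simple state handling: a flagged account runs under restricted content settings until an identity check (such as one performed through a service like Persona) confirms the user is an adult. The function names, states, and settings below are assumptions for illustration, not OpenAI's actual API.

```python
# Hypothetical gating flow for a flagged account. States, settings,
# and function names are illustrative assumptions only.
from enum import Enum, auto

class AgeStatus(Enum):
    ADULT = auto()
    FLAGGED_MINOR = auto()
    VERIFIED_ADULT = auto()

def content_policy(status: AgeStatus) -> dict:
    """Return the content settings applied to a session."""
    restricted = status is AgeStatus.FLAGGED_MINOR
    return {
        "allow_sensitive_topics": not restricted,  # e.g., depictions of self-harm
        "teen_safety_mode": restricted,
    }

def handle_verification(status: AgeStatus, id_check_passed: bool) -> AgeStatus:
    """Lift restrictions only after a successful identity verification."""
    if status is AgeStatus.FLAGGED_MINOR and id_check_passed:
        return AgeStatus.VERIFIED_ADULT
    return status

status = AgeStatus.FLAGGED_MINOR
print(content_policy(status))                        # restricted settings applied
status = handle_verification(status, id_check_passed=True)
print(content_policy(status))                        # full access restored
```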

Persona is a tool also utilized by other technology platforms, such as Roblox, which has itself faced pressure from lawmakers to strengthen child protection protocols on its services.

This age prediction system follows a series of recent safety enhancements from OpenAI. In August, the company announced plans for parental controls, which were rolled out the following month, alongside the development of the age prediction technology. In October, OpenAI established a council of eight experts to advise on the complex interplay between AI and user mental health, emotional well-being, and motivation.

The age prediction model is slated for an initial rollout in the European Union in the coming weeks, a move intended to align with regional regulatory requirements. OpenAI has stated its commitment to continuously refining the accuracy of this model over time.

Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/16366.html
