OpenAI is taking proactive steps to address growing concerns about the potential impact of artificial intelligence on mental health and well-being. The company announced Tuesday the formation of an Expert Council on Well-Being and AI, composed of eight specialists in fields including psychology, psychiatry, human-computer interaction, and digital wellness. The initiative signals OpenAI’s commitment to responsible AI development, particularly as its technologies become increasingly integrated into daily life.
The Expert Council will initially focus on providing guidance for OpenAI’s flagship chatbot, ChatGPT, and its text-to-video generation tool, Sora. Its primary objective is to help define what healthy AI interactions look like, with members contributing their expertise through regular meetings and consultations. That input will help OpenAI refine its products and ensure they support user well-being rather than detract from it.
This move comes as OpenAI faces increased scrutiny from regulators and the public regarding the potential negative effects of AI, especially on vulnerable populations. In recent months, the company has been implementing enhanced safety controls, including an age prediction system designed to automatically apply appropriate settings for users under 18. Parental controls were also introduced, allowing parents to receive notifications if their child exhibits signs of distress while using OpenAI’s services.
The timing of the Expert Council announcement is noteworthy. The Federal Trade Commission (FTC) launched an inquiry in September into several tech companies, including OpenAI, regarding the potential impact of chatbots on children and teenagers. Additionally, OpenAI is currently facing a wrongful death lawsuit concerning the alleged role of ChatGPT in a teenage suicide. These factors underscore the urgent need for OpenAI to demonstrate its commitment to safety and ethical AI development.
Before formalizing the council, OpenAI consulted informally with some of its members during the development of its parental controls. The company also brought in additional experts in psychiatry, psychology, and human-computer interaction. The creation of the Expert Council can be seen as a strategic move to establish credibility and transparency in this sensitive area. It also highlights the complexity of the issues surrounding AI and mental health, necessitating input from diverse disciplines.
In addition to the Expert Council, OpenAI is collaborating with researchers and mental health professionals within the Global Physician Network to test ChatGPT and refine its policies. This multi-pronged approach demonstrates OpenAI’s understanding that ensuring user well-being requires continuous evaluation and adaptation.
Here are the members of OpenAI’s Expert Council on Well-Being and AI:
- Andrew Przybylski, a professor of human behavior and technology at the University of Oxford.
- David Bickham, a research scientist in the Digital Wellness Lab at Boston Children’s Hospital.
- David Mohr, the director of Northwestern University’s Center for Behavioral Intervention Technologies.
- Mathilde Cerioli, the chief scientist at Everyone.AI, a nonprofit that explores the risks and benefits of AI for children.
- Munmun De Choudhury, a professor at Georgia Tech’s School of Interactive Computing.
- Dr. Robert Ross, a pediatrician by training and the former CEO of The California Endowment, a nonprofit that aims to expand access to affordable health care.
- Dr. Sara Johansen, a clinical assistant professor at Stanford University who founded its Digital Mental Health Clinic.
- Tracy Dennis-Tiwary, a professor of psychology at Hunter College.