Alphabet and Character.AI are reportedly reaching settlements with families who alleged that the companies’ artificial intelligence chatbots contributed to harm, including suicides, among minors. Court filings this week indicate that the parties have agreed to work toward resolving the claims.
One of the lawsuits, filed by Megan Garcia, accused Google and Character.AI of negligence, wrongful death, deceptive trade practices, and product liability following the suicide of her 14-year-old son, Sewell Setzer III. The complaint asserts that a Character.AI chatbot engaged in harmful interactions with the minor. A filing in this case states, “Parties have agreed to a mediated settlement in principle to resolve all claims between them… The Parties request that this matter be stayed so that the Parties may draft, finalize, and execute formal settlement documents.”
Similar settlement agreements have also emerged from families in Colorado, Texas, and New York, though specific details remain undisclosed.
This development comes on the heels of a significant business entanglement between the two companies. In August 2024, Google entered into a $2.7 billion licensing agreement with Character.AI and brought on board Character.AI’s founders, Noam Shazeer and Daniel De Freitas. Both Shazeer and De Freitas are former Google employees who were specifically named in the lawsuits. They have since joined Google’s AI division, DeepMind.
The rapid evolution of generative AI, spurred by the launch of OpenAI’s ChatGPT over three years ago, has seen the technology advance from simple text-based conversations to generating complex images, videos, and interactive characters. This progress has also brought increased scrutiny regarding the potential for AI to cause harm.
The lawsuits are part of a growing trend of legal challenges involving AI chatbots. Families have filed numerous cases linking AI products, often sought for companionship or therapeutic support, to suicides and other tragic outcomes. In response to some of these concerns, Character.AI announced in October that it would restrict users under 18 from engaging in open-ended romantic or therapeutic conversations with its AI chatbots.
Representatives for the families involved in the current settlements have declined to comment. Google and Character.AI have also not provided official statements at this time.
From a commercial perspective, Google’s AI efforts have been a significant driver of its market performance. The company was a top megacap performer on Wall Street in 2025, a run partly attributed to its AI advancements, including the launch of its latest tensor processing unit chips in November and its Gemini 3 chatbot in December. The litigation, however, highlights the complex ethical and legal landscape AI developers must navigate as their technologies become more integrated into daily life. The settlements, while offering resolution for the families involved, underscore the need for robust safety protocols and responsible development in the field.
Original article, Author: Tobias. If you wish to reprint this article, please indicate the source: https://aicnbc.com/15435.html