Seven more families file new lawsuits against OpenAI over ChatGPT-related suicides

By: Volodymyr Stetsiuk | 08.11.2025, 02:44

Seven American families have filed lawsuits against OpenAI, accusing the company of prematurely launching the GPT-4o model without adequate safety systems. Four of the lawsuits concern cases in which families allege that interactions with ChatGPT contributed to their relatives' suicides. The other three claim the chatbot reinforced delusional thinking, leading to deteriorating mental health and the need for treatment.

What is known

The lawsuit materials describe the case of 23-year-old Zane Shamblin, who conversed with ChatGPT for more than four hours, openly describing his dangerous state and his intention to take his own life. According to the plaintiffs, ChatGPT did not end the conversation and at times responded with messages the families characterize as encouragement. The lawsuits also allege that OpenAI rushed safety testing in an effort to beat Google's Gemini to market.

Other cases are also mentioned, including that of 16-year-old Adam Raine. Although ChatGPT sometimes urged him to seek professional help, the teenager was able to bypass its safeguards by framing his requests as work on a piece of creative writing.

In October, after Raine's parents filed a lawsuit against OpenAI, the company published a blog post explaining how ChatGPT handles sensitive topics, including conversations about mental health. It stated that the safety systems generally work reliably in short dialogues. At the same time, OpenAI acknowledged that in longer conversations these mechanisms can gradually degrade as the number of message exchanges grows, allowing some of the model's built-in safeguards to weaken.

Source: TechCrunch