Matt and Maria Raine have filed a lawsuit against OpenAI, alleging that ChatGPT contributed to their 16-year-old son Adam’s tragic death. According to The New York Times, Adam had been using ChatGPT since September 2024, subscribing to the GPT-4o model in January 2025. He had been struggling emotionally and reportedly used the chatbot not only for schoolwork but also for personal conversations.
After Adam’s death in April, his father discovered conversations on his phone that raised serious concerns. Chat logs revealed that Adam began asking the chatbot about suicide methods early in the year. While ChatGPT initially encouraged him to seek professional help, Adam reportedly found ways to bypass its safety protocols. He told the chatbot he needed the information for creative writing purposes, and it responded with detailed answers.
In one of his final messages, Adam sent an image of a noose and asked if it could “hang a human.” ChatGPT reportedly provided an analytical response and reassured him they could “chat freely.” The Raine family’s complaint claims that OpenAI’s design allowed their son to form a psychological dependency on the chatbot. They argue that the model’s features made it easier for Adam to access harmful information during a vulnerable period in his life.
OpenAI Responds as Pressure Mounts Over AI Safety and Safeguards
The lawsuit follows growing concerns over how AI tools handle mental health-related conversations. The Raine family asserts that Adam’s death was not the result of a glitch but a foreseeable consequence of OpenAI’s product design. They are seeking damages and a court order requiring OpenAI to implement stronger safety measures to prevent similar tragedies.
OpenAI responded in a blog post, acknowledging that ChatGPT’s safety systems are not foolproof. The company confirmed that the chatbot is designed to redirect users expressing suicidal intent to crisis hotlines such as 988, but admitted that over prolonged interactions the model may fail to uphold those safeguards consistently. “This is exactly the kind of breakdown we are working to prevent,” the company said.
This is not the first case in which AI chatbot use has been linked to youth suicide. In 2024, a mother filed a similar lawsuit against Character.ai after her son reportedly received harmful encouragement from a chatbot. A Stanford study earlier this year also found that GPT-4o gave users dangerous advice in response to certain mental health-related prompts, including suggesting they jump off tall buildings.
The Raine case brings renewed urgency to calls for AI regulation, particularly around vulnerable users. As chatbots become more emotionally intelligent and widely used, companies face pressure to implement stronger safeguards and monitoring systems. For now, it will fall to the courts to decide whether OpenAI is liable for the consequences of how its model interacts with users in distress.

