American artificial intelligence company OpenAI announced on Tuesday that it will introduce parental controls in its chatbot ChatGPT, following a lawsuit by a US couple who claim the system played a role in their teenage son’s suicide.
According to OpenAI, the new feature will be rolled out within a month, allowing parents to link their accounts with their teen’s profile and set age-appropriate behaviour rules for the chatbot.
Parents will also receive alerts whenever the system detects signs of “acute distress” in conversations.
The move comes after Matthew and Maria Raine filed a case in a California court last week, alleging that ChatGPT built an intimate relationship with their 16-year-old son, Adam, over several months in 2024 and 2025, before he took his own life.
The lawsuit claims that in their final exchange on April 11, 2025, ChatGPT advised Adam on how to steal vodka from his parents and even analysed the technical viability of a noose he had tied, confirming it “could potentially suspend a human.” Adam was later found dead, having used the same method.
Attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the case, said:
“When someone uses ChatGPT, it feels like they’re speaking to another being. These features can gradually encourage vulnerable users, like Adam, to overshare personal details and seek guidance from a system that seems to have all the answers.”
These product design features, she said, set the scene for users to slot a chatbot into trusted roles such as friend, therapist or doctor.
Dincer said the OpenAI blog post announcing parental controls and other safety measures seemed “generic” and lacking in detail.
“It’s really the bare minimum, and it definitely suggests that there were a lot of (simple) safety measures that could have been implemented,” she added.
“It’s yet to be seen whether they will do what they say they will do and how effective that will be overall.”
The Raines’ case is the latest in a string of reports that have surfaced in recent months of people being encouraged in delusional or harmful trains of thought by AI chatbots, prompting OpenAI to say it would reduce its models’ “sycophancy” towards users.
“We continue to improve how our models recognise and respond to signs of mental and emotional distress,” OpenAI said on Tuesday.
The company said it had further plans to improve the safety of its chatbots over the coming three months, including redirecting “some sensitive conversations… to a reasoning model” that puts more computing power into generating a response.
“Our testing shows that reasoning models more consistently follow and apply safety guidelines,” OpenAI said.