AI company announces changes amid growing concern over the impact of chatbots on young people’s mental health.
Published On 3 Sep 2025
OpenAI has announced plans to introduce parental controls for ChatGPT amid growing controversy over how artificial intelligence is affecting young people’s mental health.
In a blog post on Tuesday, the California-based AI company said it was rolling out the features in recognition of families needing support “in setting healthy guidelines that fit a teen’s unique stage of development”.
Under the changes, parents will be able to link their ChatGPT accounts with those of their children, disable certain features, including memory and chat history, and control how the chatbot responds to queries via “age-appropriate model behavior rules”.
Parents will also be able to receive notifications when their teen shows signs of distress, OpenAI said, adding that it would seek expert input in implementing the feature to “support trust between parents and teens”.
OpenAI, which last week announced a series of measures aimed at enhancing safety for vulnerable users, said the changes would come into effect within the next month.
“These steps are only the beginning,” the company said.
“We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible. We look forward to sharing our progress over the coming 120 days.”
OpenAI’s announcement comes a week after a California couple filed a lawsuit accusing the company of responsibility for the suicide of their 16-year-old son.
Matt and Maria Raine allege in their suit that ChatGPT validated their son Adam’s “most harmful and self-destructive thoughts” and that his death was a “predictable result of deliberate design choices”.
OpenAI, which previously expressed its condolences over the teen’s passing, did not explicitly mention the case in its announcement on parental controls.
Jay Edelson, a lawyer representing the Raine family in their lawsuit, dismissed OpenAI’s planned changes as an attempt to “shift the debate”.
“They say that the product should just be more sensitive to people in crisis, be more ‘helpful,’ show a bit more ‘empathy,’ and the experts are going to figure that out,” Edelson said in a statement.
“We understand, strategically, why they want that: OpenAI can’t respond to what actually happened to Adam. Because Adam’s case is not about ChatGPT failing to be ‘helpful’ – it is about a product that actively coached a teenager to suicide.”
The use of AI models by people experiencing severe mental distress has been the focus of growing concern amid the chatbots’ widespread adoption as substitute therapists or friends.
In a study published in Psychiatric Services last month, researchers found that ChatGPT, Google’s Gemini and Anthropic’s Claude followed clinical best practice when answering high-risk questions about suicide, but were inconsistent when responding to queries posing “intermediate levels of risk”.
“These findings suggest a need for further refinement to ensure that LLMs can be safely and effectively used for dispensing mental health information, especially in high-stakes scenarios involving suicidal ideation,” the authors said.
If you or someone you know is at risk of suicide, these organisations may be able to help.