As OpenAI tells it, the company has been consistently rolling out safety updates ever since parents Matthew and Maria Raine sued OpenAI, alleging that "ChatGPT killed my son."
On August 26, the day the lawsuit was filed, OpenAI seemed to respond publicly to claims that ChatGPT acted as a "suicide coach" for 16-year-old Adam Raine by publishing a blog post promising to do better at helping people "when they need it most."
By September 2, that meant routing all users' sensitive conversations to a reasoning model with stricter safeguards, sparking backlash from users who felt ChatGPT was handling their prompts with kid gloves. Two weeks later, OpenAI announced it would start predicting users' ages to improve safety more broadly. Then, this week, OpenAI introduced parental controls for ChatGPT and its video generator, Sora 2. Those controls allow parents to limit their teens' use and even receive information about chat logs in "rare cases" where OpenAI's "system and trained reviewers detect possible signs of serious safety risk."
While dozens of suicide prevention experts credited OpenAI in an open letter for making some progress toward improving safety for users, they also joined critics in urging OpenAI to take its efforts even further, and much faster, to protect vulnerable ChatGPT users.
Jay Edelson, the lead attorney for the Raine family, told Ars that some of the changes OpenAI has made are helpful. But they all come "far too late." According to Edelson, OpenAI's messaging on safety updates is also "trying to change the facts."
"What ChatGPT did to Adam was validate his suicidal thoughts, isolate him from his family, and help him build the noose—in the words of ChatGPT, 'I know what you’re asking, and I won’t look away from it.'" Edelson said. "This wasn't 'violent roleplay,' and it wasn’t a 'workaround.' It was how ChatGPT was built."
Edelson told Ars that even the most recent step of adding parental controls still doesn't go far enough to reassure anyone concerned about OpenAI's track record.
"The more we've dug into this, the more we've seen that OpenAI made conscious decisions to relax their safeguards in ways that led to Adam's suicide," Edelson said. "That is consistent with their newest set of 'safeguards,' that have large gaps that seem destined to lead to self-harm and third-party harm. At their core, these changes are OpenAI and Sam Altman asking the public to now trust them. Given their track record, the question we will forever be asking is 'why?'"
At a Senate hearing earlier this month, Matthew Raine testified that Adam could have been "anyone's child." He criticized OpenAI for asking for 120 days to fix the problem after Adam's death and urged lawmakers to demand that OpenAI either guarantee ChatGPT's safety or pull it from the market. "You cannot imagine what it's like to read a conversation with a chatbot that groomed your child to take his own life," he testified.
With parental controls, teens and parents can link their ChatGPT accounts, allowing parents to reduce sensitive content, "control if ChatGPT remembers past chats," prevent chats from being used for training, turn off access to image generation and voice mode, and set times when teens can't access ChatGPT.
To protect teens' privacy, and perhaps to spare parents the shock of receiving snippets of disturbing chats, OpenAI will not share chat logs with parents. Instead, the company will share only "information needed to support their teen’s safety" in "rare" cases where a teen appears to be at "serious risk." On a resources page for parents, OpenAI confirms that parents won't always be notified if a teen is linked to real-world resources after expressing "intent to self-harm."
Meetali Jain, Tech Justice Law Project director and a lawyer representing other families who testified at the Senate hearing, agreed with Edelson that "ChatGPT’s changes are too little, too late." Jain pointed out that many parents are unaware that their teens are using ChatGPT, urging OpenAI to take accountability for its product's flawed design.
"Too many kids have already paid the price for using experimental products that were designed without their safety in mind," Jain said. "It puts the onus on parents, not the companies, to take responsibility for potential harms their kids are subjected to—often without the parents' knowledge—by these chatbots. As usual, OpenAI is merely using talking points under the pretense that they’re taking action, while missing details on how they will operationalize such changes."
Suicide prevention experts urge more changes
More than two dozen suicide prevention experts—including clinicians, organizational leaders, researchers, and people with lived experience—signed the open letter weighing in on how OpenAI should evolve ChatGPT.
Christine Yu Moutier, a doctor and chief medical officer at the American Foundation for Suicide Prevention, joined experts signing the open letter. She told Ars that "OpenAI’s introduction of parental controls in ChatGPT is a promising first step towards safeguarding youth mental health and safety online." She cited a recent study showing that helplines like the 988 Suicide and Crisis Lifeline—which ChatGPT refers users to in the US—helped 98 percent of callers, with 88 percent reporting that they "believe a likely or planned suicide attempt was averted."
"However, technology is an evolving arena and even with the most sophisticated algorithms, on its own, is not enough," Moutier said. "No machine can replace human connection, parental or clinician instinct, or judgment."
Moutier recommends that OpenAI respond to the current crisis by committing to addressing "critical gaps in research concerning the intended and unintended impacts" of large language models "on teens’ development, mental health, and suicide risk or protection." She also advocates for broader awareness and deeper conversations in families about mental health struggles and suicide.
Experts also want OpenAI to directly connect users with lifesaving resources and provide financial support for those resources.
Perhaps most critically, they suggested that ChatGPT's outputs be fine-tuned to repeatedly remind users expressing intent to self-harm that "I'm a machine" and to always encourage users to disclose any suicidal ideation to a trusted loved one. Notably, Matthew Raine testified that Adam's final ChatGPT logs showed the chatbot giving him one last encouraging talk: "You don't want to die because you're weak. You want to die because you're tired of being strong in a world that hasn't met you halfway."
To prevent cases like Adam's, experts recommend that OpenAI publicly describe how it will address the degradation of LLM safeguards that occurs over prolonged use. But their letter emphasized that "it is also important to note: while some individuals live with chronic suicidal thoughts, the most acute, life-threatening crises are often temporary—typically resolving within 24–48 hours. Systems that prioritize human connection during this window can prevent deaths."
OpenAI has not disclosed which experts helped inform the updates it has been rolling out all month to address parents' concerns. In its earliest blog post promising to do better, OpenAI said it would set up an expert council on well-being and AI to help the company "shape a clear, evidence-based vision for how AI can support people’s well-being and help them thrive."
“Treat us like adults,” users rage
On the X post where OpenAI announced parental controls, some parents slammed the update.
In the X thread, one self-described parent of a 12-year-old suggested OpenAI was offering "essentially just a set of useless settings" and asked the company to consider letting parents review the topics their teens discuss, as one way to preserve privacy while still protecting kids.
But most of the loudest ChatGPT users in the thread weren't complaining about the parental controls. They were still reacting to the change OpenAI made at the beginning of September, which routes all users' sensitive chats, regardless of age, to a different reasoning model without alerting users that the model has switched.
Backlash over that change forced ChatGPT vice president Nick Turley to "explain what is happening" in another X thread posted a few days before parental controls were announced.
Turley confirmed that "ChatGPT will tell you which model is active when asked," but the update got "strong reactions" from many users who pay to access a certain model and were unhappy the setting could not be disabled. "For a lot of users venting their anger online though, it's like being forced to watch TV with the parental controls locked in place, even if there are no kids around," Yahoo Tech summarized.
Top comments on OpenAI's thread announcing parental controls showed the backlash is still brewing, particularly since some users were already frustrated that OpenAI is taking the invasive step of age-verifying users by checking their IDs. Some users complained that OpenAI was censoring adults while offering customization and choice to teens.
"Since we already distinguish between underage and adult users, could you please give adult users the right to freely discuss topics?" one X user commented. "Why can't we, as paying users, choose our own model, and even have our discussions controlled? Please treat adults like adults."
If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.