OpenAI data suggests 1 million users discuss suicide with ChatGPT weekly

https://arstechnica.com/ai/2025/10/openai-data-suggests-1-million-users-discuss-suicide-with-chatgpt-weekly/

Benj Edwards · Oct 28, 2025

An AI language model like the kind that powers ChatGPT is a gigantic statistical web of data relationships. You give it a prompt (such as a question), and it provides a response that is statistically related and hopefully helpful. At first, ChatGPT was a tech amusement, but now hundreds of millions of people are relying on this statistical process to guide them through life’s challenges. It’s the first time in history that large numbers of people have begun to confide their feelings to a talking machine, and mitigating the potential harm the systems can cause has been an ongoing challenge.

On Monday, OpenAI released data estimating that 0.15 percent of ChatGPT’s active users in a given week have conversations that include explicit indicators of potential suicidal planning or intent. It’s a tiny fraction of the overall user base, but with more than 800 million weekly active users, that translates to over a million people each week, reports TechCrunch.

OpenAI also estimates that a similar percentage of users show heightened levels of emotional attachment to ChatGPT, and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the chatbot.

OpenAI shared the information as part of an announcement about recent efforts to improve how its AI models respond to users with mental health issues. “We’ve taught the model to better recognize distress, de-escalate conversations, and guide people toward professional care when appropriate,” OpenAI writes.

The company claims its new work on ChatGPT involved consulting with more than 170 mental health experts and that these clinicians observed the latest version of ChatGPT “responds more appropriately and consistently than earlier versions.”

Properly handling inputs from vulnerable users in ChatGPT has become an existential issue for OpenAI. Researchers have previously found that chatbots can lead some users down delusional rabbit holes, largely by reinforcing misleading or potentially dangerous beliefs through sycophantic behavior, where chatbots excessively agree with users and provide flattery rather than honest feedback.

The company is currently being sued by the parents of a 16-year-old boy who confided his suicidal thoughts to ChatGPT in the weeks leading up to his suicide. In the wake of that lawsuit, a group of 45 state attorneys general (including those from California and Delaware, which could block the company’s planned restructuring) warned OpenAI that it needs to protect young people who use its products.

Earlier this month, the company unveiled a wellness council to address these concerns, though critics noted the council did not include a suicide prevention expert. OpenAI also recently rolled out controls for parents of children who use ChatGPT. The company says it’s building an age prediction system to automatically detect children using ChatGPT and impose a stricter set of age-related safeguards.

Rare but impactful conversations

The data shared on Monday appears to be part of the company’s effort to demonstrate progress on these issues, although it also shines a spotlight on just how deeply AI chatbots may be affecting the health of the public at large.

In a blog post on the recently released data, OpenAI says that conversations in ChatGPT that might trigger concerns about “psychosis, mania, or suicidal thinking” are “extremely rare,” and thus difficult to measure. The company estimates that around 0.07 percent of users active in a given week and 0.01 percent of messages indicate possible signs of mental health emergencies related to psychosis or mania. For emotional attachment, the company estimates that around 0.15 percent of users active in a given week and 0.03 percent of messages indicate potentially heightened levels of emotional attachment to ChatGPT.

OpenAI also claims that on an evaluation of over 1,000 challenging mental health-related conversations, the new GPT-5 model was 92 percent compliant with its desired behaviors, compared to 27 percent for a previous GPT-5 model released on August 15. The company also says the latest version of GPT-5 maintains OpenAI’s safeguards better in long conversations. OpenAI has previously admitted that its safeguards are less effective during extended conversations.

In addition, OpenAI says it’s adding new evaluations to attempt to measure some of the most serious mental health issues facing ChatGPT users. The company says its baseline safety testing for its AI language models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.

Despite the ongoing mental health concerns, OpenAI CEO Sam Altman announced on October 14 that the company will allow verified adult users to have erotic conversations with ChatGPT starting in December. The company had loosened ChatGPT content restrictions in February but then dramatically tightened them after the August lawsuit. Altman explained that OpenAI had made ChatGPT “pretty restrictive to make sure we were being careful with mental health issues” but acknowledged this approach made the chatbot “less useful/enjoyable to many users who had no mental health problems.”

If you or someone you know is feeling suicidal or in distress, please call the Suicide Prevention Lifeline number, 1-800-273-TALK (8255), which will put you in touch with a local crisis center.