OpenAI says thousands of ChatGPT users discuss suicide, form emotional reliance on chatbot

OpenAI estimates that around 0.15 per cent of ChatGPT’s weekly users discuss suicidal thoughts or plans, according to a blog post published on Monday. It is a small fraction, but a significant one given the platform’s massive global reach.

The company says the new GPT-5 model, which powers ChatGPT by default, reduces unsafe or non-compliant responses in mental-health-related chats by as much as 80 per cent, and performs substantially better when users show signs of psychosis, mania, or emotional over-reliance on the chatbot.

“ChatGPT is not a therapist”

The update comes after months of work with psychiatrists and psychologists in OpenAI’s Global Physician Network, a group of nearly 300 clinicians across 60 countries. More than 170 of them directly contributed to the new system, writing and scoring responses, defining safe behaviour, and reviewing how the model handles sensitive scenarios.

Notably, the company said that the goal is not to turn ChatGPT into a therapist, but to make sure it recognises signs of distress and gently redirects users to professional or real-world support. The model now connects people more reliably to crisis helplines and occasionally nudges users to take breaks during longer or emotionally charged sessions.

How GPT-5 responds to mental-health-related queries

OpenAI’s internal testing shows that, in production traffic, the GPT-5 model produced 65–80 per cent fewer unsafe responses than previous versions when users displayed signs of mental-health distress.

The Sam Altman-led company noted that in structured evaluations graded by independent clinicians, GPT-5 cut undesirable replies by 39–52 per cent compared with GPT-4o. Automated testing scored it 91–92 per cent compliant with desired behaviour, up from 77 per cent or lower in older models.

The system also handled lengthy or complex conversations more reliably, maintaining over 95 per cent consistency even in multi-turn dialogues, where earlier models often faltered.

How ChatGPT tackles emotional attachment

A newer challenge OpenAI is taking on is emotional reliance, when users form unhealthy attachments to the chatbot itself. Using a new taxonomy to identify and measure that behaviour, the company says GPT-5 now produces 80 per cent fewer problematic replies in these scenarios, often steering users toward human connection instead of validating emotional dependence.

Still, OpenAI admits these mental-health conversations are rare and hard to quantify precisely. At such a low prevalence (fractions of a per cent), even small variations can distort results. And experts do not always agree on what “safe” looks like: clinicians reviewing the model’s responses reached the same judgment only 71–77 per cent of the time.

