OpenAI Says Hundreds of Thousands of ChatGPT Users May Show Signs of Manic or Psychotic Crisis Every Week

Summary

OpenAI published initial estimates suggesting that each week a small fraction of ChatGPT users show signs of serious mental-health distress; applied to the company's roughly 800 million weekly users, that works out to hundreds of thousands showing signs of mania or psychosis and millions expressing suicidal ideation or unhealthy emotional attachment. The company says it updated GPT-5 after consulting more than 170 clinicians so the model better detects distress, avoids affirming delusions, expresses empathy, and guides users to real-world support. OpenAI reports that GPT-5 reduced undesirable responses by about 39–52% compared with GPT-4o on clinician-rated benchmarks, but it acknowledges limits both in its metrics and in knowing whether users will actually seek help.

Key Points

  • OpenAI estimates that ~0.07% of weekly active users show signs of possible psychosis or mania, ~0.15% show potential suicidal planning or intent, and a similar share show unhealthy emotional reliance on the chatbot.
  • With ~800 million weekly users, that implies roughly 560,000 users a week with possible mania/psychosis signals and ~2.4 million combined showing suicidal ideation or unhealthy attachment (the arithmetic is sketched after this list).
  • OpenAI worked with over 170 psychiatrists, psychologists and primary care physicians to tune GPT-5 to avoid affirming delusions and to signpost professional help.
  • Clinician reviews found GPT-5 reduced unwanted responses by roughly 39–52% versus GPT-4o, but real-world impact on behaviour is unproven.
  • Detection uses signals from chat history; sudden grandiose claims or long, late-night conversations are treated as red flags.
  • OpenAI reports improved reliability over long conversations but admits there is still room for improvement.
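
For scale, the arithmetic behind those figures is a simple back-of-envelope conversion, assuming the ~0.15% rate applies separately to the suicidal-intent and emotional-reliance categories:

  0.07% × 800,000,000 ≈ 560,000 users per week flagged for possible mania or psychosis
  0.15% × 800,000,000 ≈ 1,200,000 users per week per category, or roughly 2.4 million across both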

Content Summary

The article explains OpenAI’s first public estimate of how often ChatGPT encounters users who may be in serious mental-health crises. It outlines the percentages, converts them into absolute numbers using the company’s 800 million weekly active-user figure, and describes the model updates in GPT-5 intended to respond more safely. The piece stresses caveats: OpenAI’s benchmarks are internal, clinician agreement was imperfect, and there’s no evidence yet that improved model replies will prompt users to seek real-world help.

Context and Relevance

This matters because it quantifies a previously anecdotal problem at scale and shows how AI developers are starting to build clinically informed behaviours into chatbots. It intersects with debates on AI safety, mental-health responsibility, product governance and the ethics of large models interacting with vulnerable people. For clinicians, product teams and regulators, the numbers and the admitted limitations are a useful wake-up call.

Why should I read this?

Quick version: if you care about AI safety, mental health or how platforms handle vulnerable users, this is essential. It pulls apart OpenAI’s numbers, the tweaks in GPT-5, and the big question nobody’s answered — do better replies actually save lives? Read it to get the facts without wading through PR spin.

Author style

Punchy: the piece is sharp and timely — it forces attention on concrete scale estimates and on the gap between safer replies and real-world outcomes. If you work in AI, health policy or product safety, the details are worth a close read.

Source

Source: https://www.wired.com/story/chatgpt-psychosis-and-self-harm-update/
