Finding is one of the most direct statements from the tech company on how AI can exacerbate mental health issues
More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant on the scale at which AI can exacerbate mental health issues.
In addition to its estimates on suicidal ideations and related interactions, OpenAI also said that about 0.07% of users active in a given week – about 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.

Holy shit. We know that ChatGPT has a propensity to facilitate suicidal ideation, and has led to suicides. It not only fails to direct suicidal individuals to the proper help, but actually advances people toward taking action.
How many people has this killed?
I am a depression survivor. Depression is a disease and it can be deadly, but there is help.
If you are having suicidal thoughts, you can get help by texting or calling 988 in North America, or text ‘SHOUT’ to 85258 in the UK.
That seems to be an unresolved lawsuit, not knowledge.
If we are to look at the influence ChatGPT has on suicide, we should also be trying to evaluate how many people it allowed to voice their problems in a respectful, anonymous space with some safeguards, and how many of those were potentially saved from suicide.
It’s a situation where it’s easy to look at a victim of suicide who talked about it on ChatGPT and say that spurred them on. It’s incredibly hard to look at someone who talked about suicide with ChatGPT and didn’t kill themselves, and to say whether it helped them or not.