Finding is one of the most direct statements from the tech company on how AI can exacerbate mental health issues
More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the most direct statements from the artificial intelligence giant on the scale at which AI can exacerbate mental health issues.
In addition to its estimates on suicidal ideation and related interactions, OpenAI also said that about 0.07% of users active in a given week – about 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.



I don’t see anything in here to support saying ChatGPT is exacerbating anything.
And yet the article is basically all upvotes.
As of late, Lemmy has been feeling way too much like Reddit to me, where clickbait trends hard as long as it affirms the prevailing sentiment.
I’ve even pointed this out once, and had OP basically respond with “I don’t care if it’s misinformation. I agree with the sentiment.” And mods did nothing.
That’s called disinformation.
Not that information hygiene is a priority here :(
Yeah, comments often “correct” that, but that doesn’t stop the extra order of magnitude of exposure the original post gets.
As much as the Twitter format sucks, Lemmy could really use a similar “community note” blurb right below headlines.
Exactly. It’s like concluding that therapists are exacerbating suicidal ideation, psychosis, or mania just because their patients talk about those things during sessions. ChatGPT has 800 million weekly users - of course some of them are going to bring up topics like that.
It’s fine to be skeptical about the long-term effects of chatbots on mental health, but it’s just as unhealthy to be so strongly anti-anything that one throws critical thinking out the window and accepts anything that merely feels like it supports what one already wants to believe as further evidence that it must be so.
Right? The reason people are opening up to it is that you can’t open up to a human about this.
I agree with you. Today it’s easier to open yourself up to an AI that is basically a yes-man. Unfortunately, that’s the main problem from my point of view: we expect every one of our ideas to be accepted without any pushback, and in fact, that could mean a lack of essential humanity.
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In. | New York Times
/>dude wants to end it
/>tries to figure out the most humane way
/>PlEaSe ReAcH oUt
/>unhelpful.jpg
I can’t wait until humans have a right to die.