Finding is one of the most direct statements from the tech company on how AI can exacerbate mental health issues

More than a million ChatGPT users each week send messages that include “explicit indicators of potential suicidal planning or intent”, according to a blogpost published by OpenAI on Monday. The finding, part of an update on how the chatbot handles sensitive conversations, is one of the artificial intelligence giant’s most direct statements on the scale at which AI can exacerbate mental health issues.

In addition to its estimates of suicidal ideation and related interactions, OpenAI also said that about 0.07% of users active in a given week – about 560,000 of its touted 800m weekly users – show “possible signs of mental health emergencies related to psychosis or mania”. The post cautioned that these conversations were difficult to detect or measure, and that this was an initial analysis.

  • brucethemoose@lemmy.world

    Preface: I love The Guardian, and fuck Altman.

    But this is a bad headline.

    Correlation is not causation. It’s disturbing that OpenAI even possesses this data and has mined it for these statistics, and that millions of people somehow think their ChatGPT app has any semblance of privacy. But what I’m reading is that millions reached out to ChatGPT with suicidal ideations.

    Not that it’s the root cause.

    The headline is that the mental health of the world sucks, not that ChatGPT created it all of a sudden. The Guardian should be ashamed of shoehorning a “Fuck AI” angle into this for clicks when there are literally a million other malicious bits of OpenAI they could cover. This is a sad story, sourced from an app with an unprecedented (and disturbing) window into folks’ psyches en masse, that they’ve twisted into clickbait.