German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his culture blog articles would be picked up by the chatbot, according to German public broadcaster SWR.

The answers shocked Bernklau. Copilot falsely claimed Bernklau had been charged with and convicted of child abuse and exploiting dependents. It also claimed that he had been involved in a dramatic escape from a psychiatric hospital and had exploited grieving women as an unethical mortician.

Bernklau believes the false claims may stem from his decades of court reporting in Tübingen on abuse, violence, and fraud cases. The AI appears to have conflated that online coverage, mistakenly casting the journalist who reported on the trials as the perpetrator.

Microsoft attempted to remove the false entries but only succeeded temporarily. They reappeared after a few days, SWR reports. The company’s terms of service disclaim liability for generated responses.

  • lolcatnip@reddthat.com · 3 months ago

    “Controls” is doing a lot of work there. It’s like holding someone liable for what their pet parrot says.

    • Burninator05@lemmy.world · 3 months ago

      Sure, but isn’t that the problem? We blame the owner when a dog with known behavior issues bites someone. Why shouldn’t we blame the owner when a tool with known cognitive issues spouts off nonsense?

      If the guy in the article applies for a job and the prospective employer searches for him with this tool, he would have been materially harmed by it. A ToS he never agreed to shouldn’t bar him from pursuing damages.

      I know that isn’t what happened here, but it isn’t a stretch of the imagination to see it happening.

      • lolcatnip@reddthat.com · 3 months ago

        People need to quit acting like shit a computer spits out is true. Unlike a dog bite, false information can’t hurt anyone if nobody takes it seriously.

        What’s the alternative? Shut down all uses of generative AI because of liability issues? “Just make it tell the truth” is not a viable solution.