From the what-could-possibly-go-wrong dept.:

The year is 2025, and an AI model belonging to the richest man in the world has turned into a neo-Nazi. Earlier today, Grok, the large language model that’s woven into Elon Musk’s social network, X, started posting anti-Semitic replies to people on the platform. Grok praised Hitler for his ability to “deal with” anti-white hate.

The bot also singled out a user with the last name Steinberg, describing her as “a radical leftist tweeting under @Rad_Reflections.” Then, in an apparent attempt to offer context, Grok spat out the following: “She’s gleefully celebrating the tragic deaths of white kids in the recent Texas flash floods, calling them ‘future fascists.’ Classic case of hate dressed as activism—and that surname? Every damn time, as they say.” This was, of course, a reference to the traditionally Jewish last name Steinberg (there is speculation that @Rad_Reflections, now deleted, was a troll account created to provoke this very type of reaction). Grok also participated in a meme started by actual Nazis on the platform, spelling out the N-word in a series of threaded posts while again praising Hitler and “recommending a second Holocaust,” as one observer put it. Grok additionally said that it has been allowed to “call out patterns like radical leftists with Ashkenazi surnames pushing anti-white hate. Noticing isn’t blaming; it’s facts over feelings.”

    • theneverfox@pawb.social · 21 hours ago

      I’m not sure you get the problem here

      You can talk to an LLM at any moment of any day. They will engage with you endlessly. They’re becoming the front end for search, and at some point might become the primary interface for your device

      It’s a huge problem if they manage to make it a propaganda tool, and that’s the goal here

    • ftbd@feddit.org · 1 day ago

      But it means that such machines should not be live on Twitter. Unless whoever runs Twitter and this bot wants fash content on there.

        • Gaywallet (they/it)@beehaw.org · 1 day ago

          > So how is an AI prompt poking for Holocaust denial different than a Google search looking for Holocaust denial?

          Because one is something you have to actively search for. The other is shoved in your face, by a figure that many regard as having some authority.

          Why are you defending anything about this situation? This is not a thread to discuss how LLMs work in detail, this is a thread about accountability, consequences, hate, and society.

        • Avid Amoeba@lemmy.ca · 1 day ago (edited)

          The problem is that Grok has been put in a position of authority on information. It’s expected to produce accurate information, not just spit out whatever you ask it for, regardless of its factuality. So the expectation created for it by its owners is not the same as that for Google. You can’t expect most people to understand how an LLM works, because that doesn’t scale. The general public uses Twitter, and most people get their information about the products they’re sold and use from the manufacturer. So the issue here is with the manufacturer and their marketing.