• Ex Nummis@lemmy.world · 23 days ago

      First, its results are often simply wrong, so that’s no good. Second, the more people use the AI summaries, the easier it’ll be for the AI companies to subtly influence the results to their advantage. Think of advertising or propaganda.

      This is already happening, btw, and it’s the reason Musk created Grokipedia. Grok (and even other LLMs!) already use it as a “trusted source”, which it is anything but.

      • evol@lemmy.today · 23 days ago

        Okay, but it’s a search engine; they can literally just pick websites that align with a certain viewpoint and hide ones that don’t. It’s not really a new problem. If they just make Grokipedia the first result, then not having the AI give you a summary hasn’t changed anything.

    • IronBird@lemmy.world · 23 days ago

      it just makes it ever more obvious to them how many people in their life are sheep that believe anything they read online, i assume? a false sense of confidence where one might have just said “i don’t know”

      • evol@lemmy.today · 23 days ago

        So many people were already using TikTok or YouTube in place of Google search. I think AI is arguably better than those.

        edit: New business idea: take your ChatGPT question and turn it into a TikTok video. The Slop must go on

        • AmbitiousProcess (they/them)@piefed.social · 23 days ago

          The main problem is that LLMs are pulling from those sources too. An LLM often won’t distinguish between highly reputable sources and any random page that has enough relevant keywords, because it isn’t actually capable of picking its sources carefully and analyzing each one’s legitimacy, at least not without a ton of time and computing power that would make it unusable for most quick queries.

          • evol@lemmy.today · 23 days ago

            Genuinely, do you think the average person TikTok’ing their question is getting highly reputable sources? The average American has, what, a 7th-grade reading level? I think the LLM might have a better idea at this point.