• finitebanjo@lemmy.world · 16 hours ago

      That’s not really a valid argument for why, but yes, the models that assemble statistical models from training data are all bullshitting. TBH idk how people can convince themselves otherwise.

      • Encrypt-Keeper@lemmy.world · 15 hours ago

        TBH idk how people can convince themselves otherwise.

        They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but also its marketing. Marketing designed not only to convince them that AI is something it’s not, but also that anyone who says otherwise (like you) is just a luddite who’s going to be “left behind”.

        • leftzero@lemmynsfw.com · 5 hours ago

          LLMs are also very good at convincing their users that they know what they are saying.

          It’s what they’re really selected for. Looking accurate sells more than being accurate.

          I wouldn’t be surprised if many of the people selling LLMs as AI have drunk their own kool-aid (of course most just care about the line going up, but still).

        • Blackmist@feddit.uk · 12 hours ago

          It’s no surprise to me that the person at work who is most excited by AI is the same person who is most likely to be replaced by it.

          • Encrypt-Keeper@lemmy.world · 12 hours ago

            Yeah, the excitement comes from the fact that they’re thinking of replacing themselves and keeping the money. They don’t get to “Step 2” in their heads lmao.

      • turmacar@lemmy.world · 15 hours ago

        I think because it’s language.

        There’s a famous quote from Charles Babbage: when he presented his difference engine (a gear-based calculator), someone asked, “if you put in the wrong figures, will the correct ones be output?”, and Babbage couldn’t understand how anyone could so thoroughly misunderstand that the machine is just a machine.

        People are people; the main things that have changed since the cuneiform copper customer complaint are our materials science and our networking ability. Most people just assume that the things they interact with every day work the way they appear to on the surface.

        And until recently nothing other than a person could do math problems or talk back to you. So people assume that means intelligence.

        • leftzero@lemmynsfw.com · 5 hours ago

          “if you put in the wrong figures, will the correct ones be output”

          To be fair, an 1840 “computer” might be able to tell there was something wrong with the figures and ask about it or even correct them herself.

          Babbage was being a bit obtuse there; people weren’t familiar with computing machines yet. Computer was a job, and computers were expected to be fairly intelligent.

          In fact, I’d say that if anything this question shows the questioner understood enough about the new machine to realise it was not the same as what they understood a computer to be, that it lacked many of a computer’s abilities, and that they were just looking for Babbage to confirm their suspicions.

          • turmacar@lemmy.world · 4 hours ago

            “Computer” meaning a mechanical/electro-mechanical/electrical machine wasn’t used until around WWII.

            Babbage’s difference/analytical engines weren’t confusing because people called them a computer; they didn’t.

            “On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”

            • Charles Babbage

            If you give any computer, human or machine, random numbers, it will not give you “correct answers”.

            It’s possible Babbage lacked the social skills to detect sarcasm. We also have several high-profile cases of people just trusting LLMs to file legal briefs and official government “studies” because the LLM “said it was real”.

            • AppleTea@lemmy.zip · 1 hour ago

              What they mean is that before Turing, “computer” was literally a person’s job description. You hand a professional a stack of calculations with some typos, and part of the job is catching and correcting those. When a newfangled machine comes along with the same name as the job, among the first things people are gonna ask is where it falls short.

              Like, if I made a machine called “assistant”, it’d be natural for people to point out and ask about all the things a person can do that a machine just never could.

        • finitebanjo@lemmy.world · 15 hours ago

          I often feel like I’m surrounded by idiots, but even I can’t begin to imagine what it must have felt like to be Charles Babbage explaining computers to people in 1840.

      • intensely_human@lemm.ee · 9 hours ago

        They aren’t bullshitting, because the training data is based on reality. Reality bleeds through the training data into the model. The model is a reflection of reality.

        • finitebanjo@lemmy.world · 7 hours ago

          An approximation of a very small, limited subset of reality, with more than a 1-in-20 error rate, that produces massive amounts of tokens in quick succession, is a shit representation of reality. It’s in every way inferior to human accounts, to the point of being unusable in the industries where it’s promoted.
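
          (For a sense of scale, a minimal back-of-the-envelope sketch in Python, taking the “1 in 20” figure as a flat 5% per-claim error rate with independent errors; both the rate and the independence are illustrative assumptions, not measured figures:)

            # Illustrative, assumed numbers: if each claim in an output is wrong
            # independently with probability 0.05 (a "1 in 20" rate), the chance
            # that an output containing n claims is entirely correct decays
            # exponentially with n.
            for n in (1, 10, 20, 50):
                print(n, round(0.95 ** n, 3))
            # -> 1 0.95   10 0.599   20 0.358   50 0.077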

          And that error rate can only spike when the training data itself contains errors, which will only become more common as models sample their own content.

    • intensely_human@lemm.ee · 9 hours ago

      Computers are better at logic than brains are. We emulate logic; they do it natively.

      It just so happens there’s no logical algorithm for “reasoning” a problem through.