• CeeBee_Eh@lemmy.world · 10 hours ago (edited)

    > That’s why they’re calling it “AI”.

    That’s not why. They’re calling it AI because it is AI. AI doesn’t mean sapient or conscious.

    Edit: look at this diagram if you’re still unsure:

    • laz@pawb.social · 15 hours ago

      The “I” implies intelligence, of which there is none, because it’s not sentient. The label is intentionally deceptive; it’s used as a marketing buzzword.

      • CeeBee_Eh@lemmy.world · 10 hours ago

        You might want to look up the definition of intelligence then.

        By the literal definition, a flatworm has intelligence. It just doesn’t have much of it. You’re using the colloquial definition of intelligence, which takes human intelligence as the baseline.

        I’ll leave this graphic here to help you visualize what I mean:

        • Nikelui@piefed.social · 6 hours ago

          Oh, yes. I forgot that LLMs have creativity, abstract thinking, and understanding. Thanks for the reminder. /s

          • CeeBee_Eh@lemmy.world · 5 hours ago

            It’s not a requirement to have all of those things. Having just one is enough to meet the definition, such as problem solving, which LLMs are capable of.

    • thehatfox@lemmy.world · 13 hours ago

      In the general population it does. Most people aren’t using an academic definition of AI; they’re using a definition formed by popular science fiction.

      • General_Effort@lemmy.world · 9 hours ago (edited)

        Yes, that’s the point. You’d think they could at least have looked in a dictionary at some point in the last 2 years. But nope, everyone else is wrong. A round of applause for the paragons of human intelligence.

      • CeeBee_Eh@lemmy.world · 10 hours ago

        You have that backwards. People are using the colloquial definition of AI.

        “Intelligence” is defined by a group of capabilities such as pattern recognition, tool use, and problem solving. If even one of those criteria is met, the thing in question can be said to have intelligence.

        A flatworm has intelligence, just very little of it. An object detection model has intelligence (pattern recognition), just not a lot of it. An LLM has more intelligence than a basic object detection model, but still far less than a human.
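
        To make “pattern recognition” concrete, here’s a minimal sketch with a pretrained torchvision detector (the random tensor is a stand-in for a real photo; this is just an illustration, not any specific deployed model):

        ```python
        # A pretrained object detector is pattern recognition distilled:
        # it maps pixels to boxes and labels, nothing resembling understanding.
        import torch
        from torchvision.models import detection

        model = detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        model.eval()

        image = torch.rand(3, 480, 640)   # stand-in for a real photo
        with torch.no_grad():
            result = model([image])[0]    # dict of boxes, labels, scores

        print(result["labels"][:5], result["scores"][:5])
        ```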

    • dissipatersshik@ttrpg.network · 11 hours ago (edited)

      I’m not gonna lie, most people like you are afraid to entertain the idea of AI being conscious because it makes you look at your own consciousness as not being all that special or unique.

      Do you believe in spirits, souls, or god genes?

      • CeeBee_Eh@lemmy.world · 10 hours ago (edited)

        No, it’s because it isn’t conscious. An LLM is a static model (all of our current AI models are, in fact). For something to be conscious or sapient, it would need a neural net that can morph and adapt in real time, and nothing can currently do that. Training and inference are completely separate modes. A real AGI would have to run training and inference at once, continuously.
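
        A minimal PyTorch sketch of what I mean by the two modes being separate (a hypothetical toy model, not any real LLM):

        ```python
        # Toy model: weights only change in an explicit training step;
        # inference on its own never adapts anything.
        import torch
        import torch.nn as nn

        model = nn.Linear(4, 1)              # stand-in for a trained network
        x = torch.randn(8, 4)

        # Inference mode: weights frozen, no gradients, no learning.
        model.eval()
        with torch.no_grad():
            before = model.weight.clone()
            _ = model(x)                     # "answering a prompt"
        assert torch.equal(model.weight, before)    # nothing adapted

        # Training mode: a separate process that rewrites the weights.
        model.train()
        opt = torch.optim.SGD(model.parameters(), lr=0.1)
        loss = (model(x) - torch.randn(8, 1)).pow(2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()                           # only here do the weights move
        assert not torch.equal(model.weight, before)
        ```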

        • dissipatersshik@ttrpg.network · 10 hours ago

          That’s fine, but I was referring to AI as a concept and not just its current iteration or implementation.

          I agree that it’s not conscious now, but someday it could be.

          • CeeBee_Eh@lemmy.world · 7 hours ago

            That’s the same as arguing “life” is conscious, even though most life isn’t conscious or sapient.

            Some day there could be AI that’s conscious, and when it happens we will call that AI conscious. That still doesn’t make all other AI conscious.

            It’s such a weirdly binary viewpoint.