• Trincapinones@lemmy.dbzer0.com · +39/-13 · 2 days ago

    “AI” is not just LLMs or diffusion models, and I think that’s what the OP is getting at. Like, do you also hate Stockfish? Or enemies in a video game?

    • WoodScientist@lemmy.world · +24/-3 · 1 day ago

      You’re correct in a technical sense but incorrect in a social sense. In 2025, “AI” in the common vernacular means LLMs. You can huff and puff about it, and about how there are plenty of non-LLM AIs out there, but you might as well complain that people mean silicon-based Turing-complete machines when they say “computer,” even though technically a computer can be many other things: a person who does calculations by hand for a living, or Babbage’s difference engine. Plenty of things technically fall under the category of “computer,” but you know damn well what people mean when they say it. And hell, in common vernacular a smartphone isn’t even a “computer,” even though it literally is one. Words have both technical and vernacular meanings.

      In 2025, in the language real people speak in the real world, “AI” is a synonym for “LLM.”

      • Liz@midwest.social · +10 · 1 day ago

        That’s really the crux of this stupid argument. Is a neural network that analyzes X-rays before handing them to a doctor “AI”? I would say no. At this point, “AI” means “overhyped LLMs and other generalist models.” But the person trying to judge others over AI would say yes.
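        To be concrete about the kind of system I mean: it’s a plain supervised classifier, structurally nothing like an LLM. A minimal, hypothetical sketch (untrained weights, made-up sizes, assuming PyTorch), just to show the shape of the thing:

        ```python
        # Hypothetical X-ray triage model: a small CNN that scores images as
        # "abnormal" so a radiologist can prioritize them. Untrained and
        # deliberately toy-sized -- purely illustrative.
        import torch
        import torch.nn as nn

        class XrayTriage(nn.Module):
            def __init__(self):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                    nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),
                )
                self.head = nn.Linear(32, 1)  # one logit: abnormal or not

            def forward(self, x):
                return torch.sigmoid(self.head(self.features(x).flatten(1)))

        model = XrayTriage().eval()
        scan = torch.randn(1, 1, 256, 256)  # stand-in for one grayscale X-ray
        with torch.no_grad():
            score = model(scan).item()
        # The score only sets the queue order; the doctor still reads every scan.
        print(f"abnormality score: {score:.2f}")
        ```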

        • drosophila@lemmy.blahaj.zone · +2/-1 · 1 day ago

          Is a neural network that analyzes X-rays before handing them to a doctor “AI”? I would say no.

          The term “AI” is already pretty fuzzy even in the technical sense, but if that’s how you’re using it then it doesn’t mean anything at all.

      • drosophila@lemmy.blahaj.zone · +2 · 1 day ago

        It’s a failure of our education systems that people don’t know what a computer is, something they interact with every day.

        While the Sapir-Whorf hypothesis might be bunk, I’m convinced that if you go up one level in language structure there is a version of it that is true: treating words as if they don’t need a consistent definition melts your brain. For the same reason that explaining a problem to someone else helps you solve it, doing the opposite, untethering your thoughts from self-consistent explanations, stops you from explaining them even to yourself, and therefore harms your ability to think.

        I wonder if this plays some part in how ChatGPT use apparently makes people dumber: not only do they become accustomed to not having to think, they become conditioned to accept text that is essentially devoid of consistent meaning.

    • PeriodicallyPedantic@lemmy.ca · +8/-2 · 1 day ago

      How often do you think this confusion actually results in people acting as described in the tweet?

      Context matters, and the people who are the audience for tweets about Stockfish are aware of the nuance. Outside of niche communities, “AI” without additional explicit context means LLMs the vast majority of the time.

      If this isn’t a strawman, then it’s at least a misleading argument.

      • 8uurg@lemmy.world · +4 · 1 day ago

        Saying AI = LLMs is a severe oversimplification, though. LLMs and image generators are the subsets of AI that are currently most prominent, and the ones people most often knowingly interact with, but pretty much every formal definition is wider than that. Recommendation algorithms, as used on YouTube or social media, and smart photo search are further examples of AI that people interact with. Fraud detection, learning spam filters, abnormality (failure) detection, and traffic estimation are more examples still. All of these are formally defined as AI and are very much commonplace; I would not call them niche.
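        To make one of those examples concrete: a learning spam filter can be a handful of lines of classical machine learning with no LLM anywhere in it. A toy sketch (made-up training data, assuming scikit-learn):

        ```python
        # Toy learning spam filter: naive Bayes over word counts.
        # The training data is made up -- purely illustrative.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        train_mails = [
            "win a free prize now", "cheap pills limited offer",
            "meeting notes attached", "lunch tomorrow?",
        ]
        labels = ["spam", "spam", "ham", "ham"]

        spam_filter = make_pipeline(CountVectorizer(), MultinomialNB())
        spam_filter.fit(train_mails, labels)

        print(spam_filter.predict(["free offer win now"]))      # -> ['spam']
        print(spam_filter.predict(["notes from the meeting"]))  # -> ['ham']
        ```

        Filters along these lines have shipped in mail systems for decades, and they are AI by any textbook definition.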

        The fact that LLMs and image generators are currently the most prominent examples does not necessarily exclude other examples from being part of the group too.

        Using AI as a catch-all phrase is simply a case of overgeneralization, in part due to the need for brevity. In some cases the difference does not matter, or the generality is even beneficial. For example, “don’t train AI models on my art” would only marginally affect applications other than image generation and image analysis, and covers any potential future applications that may pop up.

        However, a statement like “ban AI” could easily be misconstrued and interpreted far more widely than the original author intended. People hold a variety of definitions of what does or does not constitute AI, which leads to miscommunication unless the scope is clear from context.

        It probably wouldn’t hurt to be specific and talk about the impact of a particular application, rather than debating what is (or is not) to be classified as AI.

        • PeriodicallyPedantic@lemmy.ca · +1 · 21 hours ago

          It’s like you saw my response, and processed exactly none of it before you replied.

          Did I say this is how it should be? No. I was describing the way it actually is. It’s not me who is oversimplifying; this is just how the term is used in pop culture. It doesn’t matter how much you dislike that, because we cannot be prescriptive about actual real-world usage of a word.

          Am I personally aware of the difference? Yes. I work with LLMs every day as part of my job, both as a tool and as a product.

          None of this, or what you wrote, changes the fact that in common discourse, outside of niche communities, “AI” is synonymous with LLMs and generative content/image models, almost exclusively, unless other context is provided.

          So when people see “AI” in common discourse, they’re almost always right to assume it means LLMs and generative content models.

    • kn33@lemmy.world · +13/-3 · 2 days ago

      Also, some things are called AI that aren’t. People freak out as soon as the term is mentioned, without checking whether it’s actually some sort of model or just a basic algorithm with a buzzword slapped on.
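      A hypothetical example of what that rebranding looks like in practice, the kind of “AI-powered” feature a press release might describe:

      ```python
      # Hypothetical "AI-powered smart comfort engine" from a marketing page.
      # It is three if-statements: no model, no training, no data.
      def smart_comfort_engine(temp_c: float) -> str:
          if temp_c < 18.0:
              return "heat"
          if temp_c > 24.0:
              return "cool"
          return "idle"

      print(smart_comfort_engine(16.0))  # -> heat
      ```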

    • MotoAsh@lemmy.world · +6/-14 · 2 days ago

      “AI” in video games is basically never powered by large models like LLMs or Stable Diffusion. The fact that you compare them only demonstrates how fucking little you actually know about the topic you are BLINDLY defending.
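      Typical enemy behavior is hand-authored, closer to this hypothetical finite-state machine than to any trained model:

      ```python
      # Sketch of typical game enemy "AI": a hand-written finite-state machine.
      # No training and no large model -- just rules a designer wrote down.
      # All states, thresholds, and names here are hypothetical.
      def enemy_ai(state: str, dist: float, hp: float) -> str:
          if hp < 0.2:
              return "flee"                                 # self-preservation rule
          if state == "patrol" and dist < 15.0:
              return "chase"                                # player spotted
          if state == "chase" and dist < 2.0:
              return "attack"                               # in melee range
          if state in ("chase", "attack") and dist > 25.0:
              return "patrol"                               # lost the player
          return state                                      # otherwise carry on

      print(enemy_ai("patrol", dist=10.0, hp=0.9))  # -> chase
      ```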

      • ddh@lemmy.sdf.org · +1 · 4 hours ago

        Not sure why you’re getting downvoted; most video game enemies do not learn. They can have some clever algorithms, but they don’t know anything about how you’ve responded in the past or which of their tactics work better against you. Have they been trained on player interaction at all? I would love to see some learning NPCs in games, and I’d happily dial down graphics settings to power them instead.
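        A learning NPC could start as small as a bandit-style value table (the one-step special case of Q-learning) keyed on the player’s observed moves. A toy, entirely hypothetical sketch:

        ```python
        # Toy "learning NPC": a bandit-style value table that slowly favors
        # whichever counter-tactic has worked against this player before.
        # States, actions, and rewards are all hypothetical.
        import random
        from collections import defaultdict

        ACTIONS = ["block", "dodge", "counter"]
        q = defaultdict(float)      # (player_move, npc_action) -> learned value
        alpha, epsilon = 0.3, 0.1   # learning rate, exploration rate

        def npc_respond(player_move: str) -> str:
            if random.random() < epsilon:                # explore occasionally
                return random.choice(ACTIONS)
            return max(ACTIONS, key=lambda a: q[(player_move, a)])

        def learn(player_move: str, npc_action: str, reward: float) -> None:
            key = (player_move, npc_action)
            q[key] += alpha * (reward - q[key])          # nudge toward outcome

        # The player keeps opening with an overhead swing; dodging works,
        # so over time the NPC learns to dodge that move.
        for _ in range(200):
            act = npc_respond("overhead swing")
            learn("overhead swing", act, reward=1.0 if act == "dodge" else -1.0)

        print(npc_respond("overhead swing"))  # almost always -> "dodge"
        ```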