You’re correct in a technical sense but incorrect in a social sense. In 2025, “AI” in the common vernacular means LLMs. You can huff and puff about it, and about how there are plenty of non-LLM AIs out there. But you might as well complain that people mean silicon-based Turing-complete machines when they refer to a “computer,” even though technically a computer can mean many other things. You could just as well point out that a “computer” once meant someone who did calculations by hand for a living, or that something like Babbage’s difference engine counts as a computer. There are many things that technically fall under the category of “computer,” but you know damn well what people mean when they describe one. And hell, in common vernacular a smartphone isn’t even a “computer,” even though it literally is one. Words have both technical and vernacular meanings.
In 2025, in the language real people speak in the real world, “AI” is a synonym for “LLM.”
That’s really the crux of this stupid argument. Is a neural network that analyzes X-rays before handing them to a doctor AI? I would say no. At this point, AI means “overhyped LLMs and other generalist models.” But the person trying to judge others over AI would say yes.
The term “AI” is already pretty fuzzy even in the technical sense, but if that’s how you’re using it then it doesn’t mean anything at all.
It’s a failure of our education systems that people don’t know what a computer is, something they interact with every day.
While the Sapir-Whorf hypothesis might be bunk, I’m convinced that if you go up one level in language structure there is a version of it that is true: treating words as if they don’t need a consistent definition melts your brain. For the same reason that explaining a problem to someone else helps you solve it, doing the opposite and untethering your thoughts from self-consistent explanations stops you from explaining them even to yourself, and therefore harms your ability to think.
I wonder if this plays some part in how ChatGPT use apparently makes people dumber: not only because they become accustomed to not having to think, but because they become conditioned to accept text that is essentially void of consistent meaning.
That’s a great point, and you’re right: most people don’t know or care about the technical differences.