Saying AI = LLMs is a severe oversimplification though. LLMs and image generators are the subsets of AI that are currently most prominent and most commonly knowingly interacted with, but pretty much every formal definition is wider than that. Recommendation algorithms, as used on YouTube or social media, and smart photo search are further examples of AI that people interact with. Fraud detection, learning spam filters, abnormality (failure) detection, and traffic estimation are even more examples. All of these things are formally defined as AI and are very much commonplace; I would not call them niche.
The fact that LLMs and image generators are currently the most prominent examples does not necessarily exclude other examples from being part of the group too.
Using AI as a catch-all phrase is simply a case of overgeneralization, in part due to the need for brevity. In some cases the difference does not matter, or the generality is even beneficial. For example, ‘don’t train AI models on my art’ would only marginally affect applications other than image generation and image analysis, and it covers any potential future applications that may pop up.
However, statements like ‘ban AI’ could easily be misconstrued and may be interpreted in a much wider manner than the original author intended. People hold a variety of definitions of what does or does not constitute AI, which will lead to miscommunication unless it is clear from context.
It probably wouldn’t hurt to clarify things specifically and talk about the impact of a specific application, rather than discussing what is (or is not) to be classified as AI.
It’s like you saw my response, and processed exactly none of it before you replied.
Did I say this is how it should be? No. I was describing the way it actually is. It’s not me who is oversimplifying; this is just the way the word is used in pop culture. It doesn’t matter at all how much you don’t like that, because we cannot be prescriptive about the actual real-world usage of a word.
Am I personally aware of the difference? Yes. I work with LLMs every day as part of my job, both as a tool and as a product.
None of this, or what you wrote, changes the fact that in common discourse, outside of niche communities, “AI” is synonymous with LLMs and GPT-style content and image generators, almost exclusively, unless other context is provided.
So when people see “AI” in common discourse, they’re almost always right to assume it means LLMs and GPT content generators.