Branding something as “AI” just tells me that there probably wasn’t anybody critically examining the output to assess that it wasn’t 100% BS. If you’re using computer technology to scan bodies for possible early cancer symptoms, for example, then you should have a professional look over the computer’s results and you shouldn’t use the marketing terms that are used for churning out brainless media content.
Branding something as “AI” just tells me that there probably wasn’t anybody critically examining the output to assess that it wasn’t 100% BS.
I think those two ideas are completely unrelated. LLMs are indeed an application of AI, and whether someone assesses a tool's output has nothing to do with what they call the tool. I mean, people can be selectively specific in the areas that matter, but it doesn't mean they're wrong or obtuse across the board.