You can take “justifiable” to mean whatever you feel it means in this context. e.g. Morally, artistically, environmentally, etc.
It’s never justifiable, because it can and will output incorrect information. It’s made my job worse: confidently incorrect people bug me when it’s wrong, and I have to explain why it’s wrong.
Human beings have been outputting incorrect information for years. Get a high school textbook in literally any subject (except possibly math) from the 1970s. You’ll be amazed at how much of it is oversimplified or politicized or just plain wrong.
I do agree that AI has compounded the problem. There’s a limit to how much inaccuracy/incompetence a given system can tolerate. An organization that relies on AI for critical processes had better have a way to monitor and intervene.
I mean, in my specific case, it’s a matter of the person asking an LLM to read a PDF versus them using their stupid fucking eyeballs. Just lazy shits.
That’s not really new, or unique to AI. The whole “field” of eugenics was created to give racism the mantle of scientific legitimacy. People will pick through a haystack of data to find a needle that supports (however tenuously) whatever they want to be true. LLMs are just a more convenient way to find or invent those needles.
The difference now is the machine can churn out way more data (e.g. pull requests) than a human can ever deal with.