Consider the implications if ChatGPT started saying “I don’t know” to even 30% of queries – a conservative estimate based on the paper’s analysis of factual uncertainty in training data. Users accustomed to receiving confident answers to virtually any question would likely abandon such systems rapidly.
I think we would just be more careful with how we used the technology, e.g. don’t autocomplete code if the model’s confidence doesn’t meet a reasonable threshold.
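Something like this toy gate is what I mean; the cutoff number and the confidence heuristic are made up, and a real editor would pull the confidence signal from whatever the model actually exposes (e.g. token log-probs):

```python
import math

CONFIDENCE_THRESHOLD = 0.75  # purely illustrative cutoff, tune per use case

def mean_token_probability(token_logprobs):
    """Crude 0-1 confidence score from per-token log-probabilities."""
    if not token_logprobs:
        return 0.0
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_logprob)

def maybe_autocomplete(suggestion, token_logprobs):
    """Only surface a completion when the model seems sure enough."""
    confidence = mean_token_probability(token_logprobs)
    if confidence < CONFIDENCE_THRESHOLD:
        return None  # stay silent instead of guessing
    return suggestion
```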
I would argue it’s more useful to have a system that says it doesn’t know half the time than a system that’s confidently wrong half the time.
Obviously. But more useful ≠ more money. So the fascocapitalists will ofc not implement that.
Depends on the product. From a pure AI research point of view this is what you want: a model that can realize it is missing information and decline to give an answer. But once profit becomes involved, marketing requires a fully confident output to get everyone to buy in. So we get what we get and not something more reliable.
It’s not just that, it’s also the fact that they scored the responses based on user feedback, and users tend to give better feedback to more confident responses, even when they’re wrong.
Don’t trust AI, ask your cat instead, cats know everything :3
Sure, but cats also refuse to answer way more than 30% of the time!
Remember a year ago when LLMs started getting good, and then they had to be reprogrammed to only answer the way the fascists want? They intentionally hobbled AI to protect fascist interests. Because fascism is anti-intellectualism.
I don’t follow.
They lost me on LLMs getting good.
This is… Well, not entirely convincing.
So, say the computational cost triples. Intelligent methods to mitigate this would include purpose-built hardware to optimize these processes. That’s a big lift, but the reward would be calculable and the ROI significant enough that there’s no way they won’t pursue it. I think it’s a realistically conquerable problem.
And so what if it doesn’t know? Existing solutions will scour the Internet on command, and that functionality could be triggered automatically whenever uncertainty is sufficiently high.
Combine that Internet-access capability with a certainty calculation, assume the hardware optimization arrives, and these problems, while truly significant, seem solvable (rough sketch below).
That said, the solution will most likely make our world uninhabitable, so that’s neat.
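To make the “automate it” part concrete, here’s a toy version of the fallback loop I’m imagining. Everything here is a placeholder: `estimate_uncertainty`, `web_search`, `generate`, and the 0.6 cutoff are stand-ins for whatever a real system would use, not anyone’s actual API:

```python
UNCERTAINTY_THRESHOLD = 0.6  # arbitrary, purely illustrative

def estimate_uncertainty(answer):
    """Placeholder: could be token log-probs, self-consistency sampling,
    or a learned verifier in a real system."""
    return 0.9 if answer is None else 0.2

def web_search(query):
    """Placeholder for whatever retrieval/search tool the assistant can call."""
    return [f"(imagine search results for: {query})"]

def generate(query, context=None):
    """Placeholder for the model call, with or without retrieved context."""
    if context is None:
        return None  # pretend the model had nothing confident to say
    return f"Answer to {query!r} grounded in {len(context)} retrieved sources"

def answer_query(query):
    """Answer directly when confident; otherwise fetch evidence and retry."""
    draft = generate(query)
    if estimate_uncertainty(draft) <= UNCERTAINTY_THRESHOLD:
        return draft
    evidence = web_search(query)  # only hit the web when the model is unsure
    return generate(query, context=evidence)
```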
My concern on top of this is that they won’t run out of funding even if private investment dries up. The states (US, China) won’t stop funding this until they reach total dominance.
We’re so screwed, guys.