Pro@programming.dev to Programming@programming.dev · English · 2 days ago
Adding a feature because ChatGPT incorrectly thinks it exists (www.holovaty.com)
cross-posted to: hackernews@lemmy.bestiver.se
Lovable Sidekick@lemmy.world · edited 23 hours ago
ChatGPT didn’t “think” anything. It generated instructions telling users to do things incorrectly based on the human-generated content in its training data, which it didn’t understand because it doesn’t understand anything.
whotookkarl@lemmy.world · 8 hours ago
Something like this should be a warning label on AI.