Should be …relations between the two allies have grown sour amid…, or …tensions between the two allies have grown amid….
“tensions” growing sour makes no sense though. Are there sweet international tensions?
You can really see the deteriorating quality of journalism in writing like this. They’re being paid for this. Supposedly, anyway.
This happens sometimes in my own writing when I’m revising quickly. Of course I don’t have an editor and I’m just commenting on Lemmy and Reddit, but same idea. First I write the sentence one way, then I revise it to a different phrasing while leaving behind a trace of the removed phrase. The final sentence might check out grammatically but not logically, so a simple grammar checker could miss it.
Could just be a way of saying, like, tensions have grown worse. When things sour, they get worse. Unless it’s sour cream. Or citrus. Or pickles.
I dunno, I’m not a linguist.
Or it’s ChatGPT
Fuck, you’re probably right
Honestly, I think ChatGPT wouldn’t make that particular mistake. Sounding proper is its primary purpose. Maybe a cheap knockoff.
ChatGPT just guesses the next word. Stop anthropomorphizing it.
Yes, it does that because it was designed to sound convincing, and guessing the next word is a good method for accomplishing that. That’s the primary goal behind the design of every chatbot, and what the Turing Test was intended to gauge. Anyone who makes a chatbot wants it to sound good first and foremost.
it guesses the next word… based on examples created by humans. It’s not just making shit up out of thin air.
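In toy form, “guesses the next word based on human examples” looks roughly like this. This is a deliberately crude bigram sketch with made-up sample text, nothing like the neural network ChatGPT actually uses, but the generate-one-word-at-a-time loop is the same shape:

```python
# Toy "guess the next word from human-written examples":
# count which word follows which in some sample text, then
# extend a prompt by repeatedly taking the most common successor.
# (Real LLMs learn a neural network over tokens instead of raw
# counts, but the autoregressive loop looks like this.)
from collections import Counter, defaultdict

examples = (
    "tensions between the allies have grown amid the dispute "
    "tensions between the allies have grown amid the standoff"
).split()

# For each word, tally the words that follow it in the examples.
successors = defaultdict(Counter)
for prev, nxt in zip(examples, examples[1:]):
    successors[prev][nxt] += 1

def continue_text(prompt, n_words=6):
    words = prompt.split()
    for _ in range(n_words):
        options = successors.get(words[-1])
        if not options:
            break  # the examples never show what follows this word
        words.append(options.most_common(1)[0][0])  # greedy pick
    return " ".join(words)

print(continue_text("tensions between"))
# prints: "tensions between the allies have grown amid the"
```

It can only stitch together continuations that appeared in what it counted, so “grown sour amid” only comes out if something like it went in.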
Humans are just electrified meat. Stop anthropomorphizing it.
Found Andrew Ure’s account
🙄
Another example of why I hate techies
Lol making a mistake isn’t unique to humans. Machines make mistakes.
Congratulations for knowing that a LLM isn’t the same as a human though, I guess!
I knew someone would say that.
TalkFOS