Fucking obviously. Until Data’s positronic brain becomes reality, AI is not actual intelligence.
It’s an expensive, carbon-spewing parrot.
I think it’s important to note (I’m not an LLM; I know that phrase triggers you to assume I am) that they haven’t proven this is an inherent architectural issue, which I think would be the next step in the assertion.
Do we know that they don’t and can’t reason, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don’t? That’s the big question that needs answering. It’s still possible that we just haven’t properly incentivized reasoning over memorization during training.
If someone can objectively answer “no” to that, the bubble collapses.
No shit. This isn’t new.
Most humans don’t reason. They just parrot shit too. The design is very human.
That’s why CEOs love them. When your job is 90% spewing BS, a machine that does the same is impressive.
LLMs deal with tokens. Essentially, they predict a series of bytes.
Humans do much, much, much, much, much, much, much more than that.
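For the curious, here’s a toy sketch of what “predicting tokens” means mechanically: count which token follows which, then always emit the likeliest continuation. (Purely illustrative; real LLMs use learned weights over long contexts, not lookup tables.)

```python
# Toy "next token" predictor: count which word follows which in a tiny
# corpus, then always emit the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token: str) -> str:
    # The statistically likeliest continuation, nothing more.
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # 'cat' -- seen twice, vs. 'mat'/'rat' once each
```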
Yeah, I’ve always said the flaw in Turing’s Imitation Game concept is that if an AI was indistinguishable from a human, it wouldn’t prove it’s intelligent. Because humans are dumb as shit. Dumb enough to force one of the smartest people in the world to take a ton of drugs which eventually killed him, simply because he was gay.
I think that person had to choose between the drugs or the hard-core prisons of 1950s England, where being a bit odd was enough to guarantee “an incredibly difficult time,” as they say in England. I would’ve chosen the drugs as well, hoping they would fix me. Too bad that without testosterone you’re going to be suicidal and depressed; I’d rather choose to keep my hair than be horny all the time.
I’ve heard something along the lines of, “it’s not when computers can pass the Turing Test, it’s when they start failing it on purpose that’s the real problem.”
Yeah, we’re so stupid that we’ve figured out advanced maths and physics, and built incredible skyscrapers and the LHC. We may, as individuals, be more or less intelligent, but humans as a whole are incredibly intelligent.
No way!
Statistical Language models don’t reason?
But OpenAI, robots taking over!
Thank you, Captain Obvious! Only those who think LLMs are like “little people in the computer” didn’t know this already.
Yeah, well, there are a ton of people literally falling into psychosis, led on by LLMs. So unfortunately it’s not that many people who already knew it.
Dude, they made ChatGPT a little more boot-licky and now many people are convinced they’re literal messiahs. All it took was a chatbot and a few hours of talk.
Of course; that is obvious to anyone with basic knowledge of neural networks, no?
I still remember Geoff Hinton’s criticisms of backpropagation.
IMO it is still remarkable what NNs managed to achieve: some form of emergent intelligence.
No shit
You assume humans do the opposite? We literally institutionalize humans who don’t follow set patterns.
Maybe you failed all your high school classes, but that ain’t got none to do with me.
Funny how triggering it is for some people when anyone acknowledges humans are just evolved primates doing the same pattern matching.
People aren’t institutionalized just for failing to follow “set” patterns. That’s why you’re getting downvotes.
Some of those humans can operate with the same brain rules all right. They may even be more efficient at it than you and I are. The higher-level functions are a different thing.
That’s absolutely what it is. It’s a pattern on here. Any acknowledgment of humans being animals or less than superior gets hit with pushback.
Humans are animals. But an LLM is not an animal and has no reasoning abilities.
It’s built by animals, and it reflects them. That’s impressive on its own. Doesn’t need to be exaggerated.
I appreciate your telling the truth. No downvotes from me. See you at the loony bin, amigo.
We also reward people who can memorize and regurgitate even if they don’t understand what they are doing.
Some of them, sometimes. But some are adulated and free and contribute vast swathes to our culture and understanding.
This is so Apple: claiming to invent or discover something “first,” 3 years later than the rest of the market.
Trust Apple. Everyone else who was in the space first is lying.
lol, is this news? I mean, we call it AI, but it’s just LLMs and variants; it doesn’t think.
Proving it matters. Science is constantly proving things that people believe are obvious, because people have an uncanny ability to believe things that are false. Some people will keep believing things long after science has proven them false.
The “Apple” part. CEOs only care what companies say.
Apple is significantly behind and arrived late to the whole AI hype, so of course it’s in their absolute best interest to keep showing how LLMs aren’t special or amazingly revolutionary.
They’re not wrong, but the motivation is also pretty clear.
Apple always arrives late to any new tech, doesn’t mean they haven’t been working on it behind the scenes for just as long though…
“Late to the hype” is actually a good thing. Gen AI is a scam wrapped in idiocy wrapped in a joke. That Apple is slow to ape the idiocy of Microsoft is just fine.
They need to convince investors that this delay wasn’t due to incompetence. That spin will only be somewhat effective as long as there isn’t an innovation that makes AI more effective.
If that happens, Apple shareholders will, at best, ask the company to increase investment in that area or, at worst, to restructure the company, which could also mean a change in CEO.
Maybe they are so far behind because they jumped on the same train but then failed at achieving what they wanted based on the claims. And then they started digging around.
Yes, Apple haters can’t admit or understand it, but Apple doesn’t do pseudo-tech.
They may do silly things, and they may love their 100% markup, but it’s all real technology.
The AI pushers of today are akin to the pushers of paranormal phenomena a century ago. These pushers want us to believe, need us to believe, so they can get us addicted and extract value from our very existence.
"It’s part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, ‘that’s not thinking’." -Pamela McCorduck´.
It’s called the AI Effect. As Larry Tesler puts it, “AI is whatever hasn’t been done yet.”
Yesterday I asked an LLM, “how much energy is stored in a grand piano?” It responded that there is no energy stored in a grand piano because it doesn’t have a battery.
Any reasoning human would have understood that question to be referring to the tension in the strings.
Another example is asking “does lime cause kidney stones?”. It didn’t assume I meant lime the mineral; it went with lime the citrus fruit instead.
Once again a reasoning human would assume the question is about the mineral.
Ask these questions again in a slightly different way and you might get a correct answer, but it won’t be because the LLM was thinking.
I’m not sure how you arrived at lime the mineral being a more likely question than lime the fruit. I’d expect someone asking about kidney stones would also be asking about foods that are commonly consumed.
This kind of just goes to show there are multiple ways something can be interpreted. Maybe a smart human would ask for clarification, but for sure AIs today will just happily spit out the first answer that comes up. LLMs are extremely “good” at making up answers to leading questions, even if they’re completely false.
Honestly, I thought about the chemical energy in the materials making up the piano, and the energy burning it would release.
The tension of the strings would actually store a pretty minuscule amount of energy, since there’s very little stretch in a piano wire: the force might be high, but the potential energy (the work done to tension the wire, by hand with a wrench) is low.
Compared to burning a piece of wood, which would release orders of magnitude more energy.
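A back-of-envelope sketch bears this out; every figure below is a rough, order-of-magnitude assumption, not a measured piano spec:

```python
# Elastic energy in the strings vs. chemical energy in the wood.
# All numbers are order-of-magnitude assumptions.
F = 700            # N, assumed typical tension per string
L = 1.0            # m, representative string length
A = 8e-7           # m^2, cross-section of ~1 mm diameter steel wire
E = 2e11           # Pa, Young's modulus of steel
n_strings = 230    # a grand piano has roughly 220-240 strings

# Strain energy per string: U = F^2 * L / (2 * A * E)
elastic = n_strings * F**2 * L / (2 * A * E)

wood = 200                 # kg of wood, assumed
combustion = wood * 16e6   # J, at ~16 MJ/kg for dry wood

print(f"elastic: ~{elastic:.0f} J")       # a few hundred joules
print(f"chemical: ~{combustion:.1e} J")   # ~3e9 J, roughly ten million times more
```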
But 90% of “reasoning humans” would answer just the same. Your questions are based on some non-trivial knowledge of physics, chemistry and medicine that most people do not possess.
That entire paragraph is much better at supporting the precise opposite argument. Computers can beat Kasparov at chess, but they’re clearly not thinking when making a move - even if we use the most open biological definitions for thinking.
By that metric, you can argue Kasparov isn’t thinking during chess, either. A lot of human chess “thinking” is recalling memorized openings, evaluating positions many moves deep, and other tasks that map to what a chess engine does. Of course Kasparov is thinking, but then you have to conclude that the AI is thinking too. Thinking isn’t a magic process, nor is it tightly coupled to human-like brain processes as we like to think.
No, it shows how certain people misunderstand the meaning of the word.
You have called NPCs in video games “AI” for a decade, yet you were never implying they were somehow intelligent. The whole argument is strangely inconsistent.
Intelligence has a very clear definition.
It requires the ability to acquire knowledge, understand knowledge, and use knowledge.
No one has been able to create a system that can understand knowledge, therefore none of it is artificial intelligence. Each generation is merely a more and more complex knowledge model. Useful in many ways, but never intelligent.
Strangely inconsistent + smoke & mirrors = profit!
deleted by creator
Who is “you”?
Just because some dummies supposedly think that NPCs are “AI”, that doesn’t make it so. I don’t consider checkers to be a litmus test for “intelligence”.
“You” applies to anyone that doesn’t understand what AI means. It’s an umbrella term for a lot of things.
NPCs ARE AI. AI doesn’t mean “human-level intelligence” and never did. Read the wiki if you need help understanding.
I’m going to write a program to play tic-tac-toe. If y’all don’t think it’s “AI”, then you’re just haters. Nothing will ever be good enough for y’all. You want scientific evidence of intelligence?!?! I can’t even define intelligence so take that! \s
Seriously tho. This person is arguing that a checkers program is “AI”. It kinda demonstrates the loooong history of this grift.
It is. And it always has been. “Artificial intelligence” doesn’t mean a feeling, thinking robot person (that would fall under AGI or artificial consciousness); it’s a vast field of research in computer science with many, many things under it.
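For the curious, here’s roughly what the kind of program being discussed looks like: a minimal minimax search for tic-tac-toe. Plain game-tree search, textbook “AI” in the research-field sense, with no learning or “thinking” involved (a sketch, not production code):

```python
# Classic "AI" in the textbook sense: exhaustive game-tree search
# (minimax) for tic-tac-toe. No learning, just search.

def winner(b):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for i, j, k in lines:
        if b[i] and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(board, player):
    # Score from `player`'s perspective: +1 win, -1 loss, 0 draw.
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: draw
    opponent = "O" if player == "X" else "X"
    best_score, best_move = -2, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)
        board[m] = None
        if -score > best_score:  # opponent's gain is our loss
            best_score, best_move = -score, m
    return best_score, best_move

score, move = minimax([None] * 9, "X")
print(score, move)  # 0 0: perfect play from an empty board is a draw
```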
ITT: people who obviously did not study computer science or AI at even an undergraduate level.
Y’all are too patient. I can’t be bothered to spend the time to give people free lessons.
The computer science industry isn’t the authority on artificial intelligence it thinks it is. The industry is driven by a level of hubris that causes people to step beyond the bounds of science and into the realm of humanities without acknowledgment.
Wow, I would deeply apologise on the behalf of all of us uneducated proles having opinions on stuff that we’re bombarded with daily through the media.
Yeah that’s exactly what I took from the above comment as well.
I have a pretty simple bar: until we’re debating the ethics of turning it off or otherwise giving it rights, it isn’t intelligent. No it’s not scientific, but it’s a hell of a lot more consistent than what all the AI evangelists espouse. And frankly if we’re talking about the ethics of how to treat something we consider intelligent, we have to go beyond pure scientific benchmarks anyway. It becomes a philosophy/ethics discussion.
Like crypto it has become a pseudo religion. Challenges to dogma and orthodoxy are shouted down, the non-believers are not welcome to critique it.
This is why I say these articles are so similar to how right wing media covers issues about immigrants.
There’s some weird media push to convince the left to hate AI. Think of all the headlines about these issues; there are so many similarities. They’re taking jobs. They’re a threat to our way of life. The headlines talk about how they will sexually assault your wife, your children, you. Threats to the environment. There are articles like this one, where they take something known, twist it to sound nefarious, and keep the story alive to avoid decay of interest.
Then when they pass laws, we’re all primed to accept them removing whatever it is that advantages them and disadvantages us.
“This is why I say these articles are so similar to how right wing media covers issues about immigrants.”
Maybe the actual problem is people who equate computer programs with people.
“Then when they pass laws, we’re all primed to accept them removing whatever it is that advantages them and disadvantages us.”
You mean laws like this? jfc.
Literally what I’m talking about. They have been pushing anti-AI propaganda to alienate the left from embracing it while the right embraces it. You have such a blind spot about this, you can’t even see you’re making my argument for me.
That depends on your assumption that the left would have anything relevant to gain by embracing AI (whatever that’s actually supposed to mean).
What isn’t there to gain?
Its power lies in ingesting language and producing infinite variations. We can feed it talking points, ask it to refine our ideas, test their logic, and even request counterarguments to pressure-test our stance. It helps us build stronger, more resilient narratives.
We can use it to make memes. Generate images. Expose logical fallacies. Link to credible research. It can detect misinformation in real-time and act as a force multiplier for anyone trying to raise awareness or push back on disinfo.
Most importantly, it gives a voice to people with strong ideas who might not have the skills or confidence to share them. Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.
Sure, it has flaws. But rejecting it outright while the right embraces it? That’s beyond shortsighted; it’s self-sabotage. And unfortunately, after the last decade, that kind of misstep is par for the course.
I have no idea what sort of AI you’ve used that could do any of the stuff you’ve listed. A program that doesn’t reason won’t expose logical fallacies with any rigour or refine anyone’s ideas. It will link to credible research that you could already find on Google, but will also add some hallucinations to the summary. And so on; it’s completely divorced from how the stuff currently works.
“Someone with a brilliant comic concept but no drawing ability? AI can help build a framework to bring it to life.”
That’s a misguided view of how art is created. Supposed “brilliant ideas” are a dime a dozen; it takes brilliant writers and artists to make them real. Someone with no understanding of how good art works, just having an image generator produce the images, will end up with a boring comic no matter the initial concept. If you are not competent in a visual medium, then don’t make it visual; write a story or an essay.
Besides, most of the popular and widely shared webcomics out there are visually extremely simple or just bad (look at SMBC or xkcd or - for a right-wing example - Stonetoss).
For now I see no particular benefits that the right wing has obtained by using AI either. They either make it feed back into their delusions, or they whine about the evil leftists censoring the models (by e.g. blocking their usage of slurs).
Here is ChatGPT doing what you said it can’t: finding all the logical fallacies in what you write:
You’re raising strong criticisms, and it’s worth unpacking them carefully. Let’s go through your argument and see if there are any logical fallacies or flawed reasoning.
- Straw Man Fallacy
“Someone with no understanding of how good art works just having an image generator produce the images will result in a boring comic no matter the initial concept.”
This misrepresents the original claim:
“AI can help create a framework at the very least so they can get their ideas down.”
The original point wasn’t that AI could replace the entire creative process or make a comic successful on its own—it was that it can assist people in starting or visualizing something they couldn’t otherwise. Dismissing that by shifting the goalposts to “producing a full, good comic” creates a straw man of the original claim.
- False Dichotomy
“If you are not competent in a visual medium, then don’t make it visual, write a story or an essay.”
This suggests a binary: either you’re competent at visual art or you shouldn’t try to make anything visual. That’s a false dichotomy. People can learn, iterate, or collaborate, and tools like AI can help bridge gaps in skill—not replace skill, but allow exploration. Many creators use tools before mastery (e.g., musicians using GarageBand, or writers using Grammarly).
- Hasty Generalization
“Supposed ‘brilliant ideas’ are a dime a dozen…”
While it’s true that execution matters more than ideas alone, dismissing the value of ideas altogether is an overgeneralization. Many successful works do start with a strong concept—and while many fail in execution, tools that lower the barrier to prototyping or drafting can help surface more workable ideas. The presence of many bad ideas doesn’t invalidate the potential value of enabling more people to test theirs.
- Appeal to Ridicule / Ad Hominem (Light)
“…result in a boring comic…” / “…just bad (look at SMBC or xkcd or…)”
Criticizing popular webcomics like SMBC or xkcd by calling them “bad” doesn’t really support your broader claim. These comics are widely read because of strong writing and insight, despite minimalistic visuals. It comes off as dismissive and ridicules the counterexamples rather than engaging with them. That’s not a logical fallacy in the strictest sense, but it’s rhetorically weak.
- Tu Quoque / Whataboutism (Borderline)
“For now I see no particular benefits that the right-wing has obtained by using AI either…”
This seems like a rebuttal to a point that wasn’t made directly. The original argument wasn’t that “the right is winning with AI,” but rather that alienating the left from it could lead to missed opportunities. Refuting a weaker version (e.g., “the right is clearly winning with AI”) isn’t addressing the original concern, which was more about strategic adoption.
Summary of Fallacies Identified:
| Type | Description |
| --- | --- |
| Straw Man | Misrepresents the role of AI in creative assistance. |
| False Dichotomy | Assumes one must either be visually skilled or not attempt visual media. |
| Hasty Generalization | Devalues “brilliant ideas” universally. |
| Appeal to Ridicule | Dismisses counterexamples via mocking tone rather than analysis. |
| Tu Quoque-like | Compares left vs. right AI use without addressing the core point about opportunity. |
Your criticism is thoughtful and not without merit—but it’s wrapped in rhetoric that sometimes slips into oversimplification or misrepresentation of the opposing view. If your goal is to strengthen your argument or have a productive back-and-forth, refining those areas could help. Would you like to rewrite it in a way that keeps the spirit of your critique but sharpens its logic?
At this point you’re just arguing for argument’s sake. You’re not wrong or right; you’re muddying things. Saying it’ll produce boring comics misses the entire point. Saying it’s the same as Google is pure ignorance of what it can do. But this goes to my point about how this stuff is all similar to anti-immigrant mentality. The people who buy into it will make these kinds of ignorant and shortsighted statements just to prove things that simply are not true. They’ve bought into the hype and need to justify it.
deleted by creator
Because it’s a fear-mongering angle that still sells. AI has been a vehicle for sci-fi for so long that convincing Boomers it won’t kill us all is the hard part.
I’m a moderate user of LLMs for code and a skeptic of their abilities, but 5 years from now, when we are leveraging ML models for groundbreaking science and haven’t been nuked by SkyNet, all of this will look quaint and silly.
5 years from now? Or was it supposed to be 5 years ago?
Pretty sure we already have skynet.
Yah, of course they do; they’re computers.
Computers are better at logic than brains are. We emulate logic; they do it natively.
It just so happens there’s no logical algorithm for “reasoning” a problem through.
That’s not really a valid argument for why, but yes, the models that use training data to assemble statistical models are all bullshitting. TBH idk how people can convince themselves otherwise.
“TBH idk how people can convince themselves otherwise.”
They don’t convince themselves. They’re convinced by the multi-billion-dollar corporations pouring unholy amounts of money into not only the development of AI, but its marketing. Marketing designed not only to convince them that AI is something it’s not, but also that anyone who says otherwise (like you) is just a luddite who’s going to be “left behind”.
LLMs are also very good at convincing their users that they know what they are saying.
It’s what they’re really selected for. Looking accurate sells more than being accurate.
I wouldn’t be surprised if many of the people selling LLMs as AI have drunk their own kool-aid (of course most just care about the line going up, but still).
It’s no surprise to me that the person at work who is most excited by AI is the same person who is most likely to be replaced by it.
Yeah, the excitement comes from the fact that they’re thinking of replacing themselves and keeping the money. They don’t get to “Step 2” in their heads lmao.
I think because it’s language.
There’s a famous quote from Charles Babbage, from when he presented his difference engine (a gear-based calculator): someone asked “if you put in the wrong figures, will the correct ones be output?”, and Babbage couldn’t understand how someone could so thoroughly misunderstand that the machine is just a machine.
People are people; the main things that have changed since the cuneiform copper customer complaint are our materials science and networking ability. Most things that people interact with every day, most people just assume work like they appear to on the surface.
And nothing other than a person can do math problems or talk back to you. So people assume that means intelligence.
“if you put in the wrong figures, will the correct ones be output”
To be fair, an 1840 “computer” might be able to tell there was something wrong with the figures and ask about it or even correct them herself.
Babbage was being a bit obtuse there; people weren’t familiar with computing machines yet. Computer was a job, and computers were expected to be fairly intelligent.
In fact I’d say that if anything this question shows that the questioner understood enough about the new machine to realise it was not the same as they understood a computer to be, and lacked many of their abilities, and was just looking for Babbage to confirm their suspicions.
“Computer” meaning a mechanical/electro-mechanical/electrical machine didn’t come into use until around WWII.
Babbage’s difference/analytical engines weren’t confusing because people called them a computer; they didn’t.
“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
- Charles Babbage
If you give any computer, human or machine, random numbers, it will not give you “correct answers”.
It’s possible Babbage lacked the social skills to detect sarcasm. We also have several high profile cases of people just trusting LLMs to file legal briefs and official government ‘studies’ because the LLM “said it was real”.
I often feel like I’m surrounded by idiots, but even I can’t begin to imagine what it must have felt like to be Charles Babbage explaining computers to people in 1840.
They aren’t bullshitting because the training data is based on reality. Reality bleeds through the training data into the model. The model is a reflection of reality.
An approximation of a very small, limited subset of reality, with more than a 1-in-20 error rate, that produces massive amounts of tokens in quick succession, is a shit representation of reality, inferior in every way to human accounts, to the point of being unusable for the industries in which it’s promoted.
And that error rate can only spike when the training data itself contains errors, which will only grow as models sample their own content.
Just like me
Python code for reversing the linked list.
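(And for the record, here’s what that memorized answer typically looks like, the standard iterative version:)

```python
# Reverse a singly linked list by repointing each node at its predecessor.
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def reverse(head):
    prev = None
    while head:
        head.next, prev, head = prev, head, head.next
    return prev

# 1 -> 2 -> 3 becomes 3 -> 2 -> 1.
node = reverse(Node(1, Node(2, Node(3))))
while node:
    print(node.value)  # 3, 2, 1
    node = node.next
```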
Fair, but the same is true of me. I don’t actually “reason”; I just have a set of algorithms memorized by which I propose a pattern that seems like it might match the situation, then a different pattern by which I break the situation down into smaller components and then apply patterns to those components. I keep the process up for a while. If I find a “nasty logic error” pattern match at some point in the process, I “know” I’ve found a “flaw in the argument” or “bug in the design”.
But there’s no from-first-principles method by which I developed all these patterns; it’s just things that have survived the test of time when other patterns have failed me.
I don’t think people are underestimating the power of LLMs to think; I just think people are overestimating the power of humans to do anything other than language prediction and sensory pattern prediction.
You’re either an LLM, or you don’t know how your brain works.
LLMs don’t know how they work either.