that’s very true, I’m just saying this paper did not eliminate the possibility and is thus not as significant as it sounds. If they had accomplished that, the bubble would collapse; as it stands, this will not meaningfully change anything.
also, it’s not as unreasonable as that, because these are automatically assembled bundles of simulated neurons.
This paper does provide a solid proof by counterexample that reasoning (following an algorithm) is not occurring when it should.
The paper doesn’t need to prove that reasoning never has or will occur. It only demonstrates that current claims of AI reasoning are overhyped.
It does need to do that to meaningfully change anything, however.
Other way around. The claimed meaningful change (reasoning) has not occurred.
Meaningful change is not happening because of this paper either; I don’t know why you’re playing semantic games with me, though.
I’m trying to highlight the goal of this paper.
This is a knock-them-down paper by Apple justifying (to their shareholders) their non-investment in LLMs. It is not a build-them-up paper trying for meaningful change and to create a better AI.
That’s not the only way to make meaningful change; getting people to give up on LLMs would also be meaningful change. This does very little for anyone who isn’t Apple.