The claim is not that all LLMs are agents, but rather that agents (which incorporate an LLM as one of their key components) are more powerful than an LLM on its own.
We don’t know how far we are from recursive self-improvement. Honestly, we might already be there; how much of an LLM researcher’s job can already be automated? It’s also unclear whether there’s a ceiling on what a recursively self-improved GPT-4.x (or whatever comes next) can achieve; perhaps there’s a key hypothesis it will never formulate in its quest for self-improvement.