Okay, I won’t even read past the first paragraph because you’re so incredibly wrong that it hurts.
First-generation LLMs were bad at writing long stretches of code; today we’re on the fourth (or by some metrics, fifth) generation.
I’ve trained LLM agents on massive codebases with a <0.1% fault ratio on first pass. Besides, tool calling is a thing, but I guess if I started detailing how MCP servers work and how they can be utilised to ensure an LLM agent doesn’t make incorrect calls, you’d come up with another two- or three-year-old argument that simply doesn’t have a leg to stand on today.
lol, if you had read the rest of my post you would have seen that I admitted you might be right. But go off, I guess.