

So far, there's a serious cognitive step that LLMs just can't take to become productive. They can output code, but they don't understand what's going on. They don't grasp architecture. Large projects don't fit in their context window.
There's a remarkably effective solution to this, one that helps humans and models alike: write documentation.
It’s actually kind of funny how the LLM wave has sparked a renaissance of high-quality documentation. Who would have thought?
Completely hands-off, no-review, no-technical-skill vibe coding is obviously snake oil, yes.
This is a pretty large problem when it comes to learning about LLM-based tooling: lots of noise, very little signal.