cross-posted from: https://lemmy.zip/post/49954591
“No Duh,” say senior developers everywhere.
The article explains that vibe code is often close to functional, but not quite, requiring developers to go in and find where the problems are - resulting in a net slowdown of development rather than productivity gains.
Then there’s the issue of finding an agreed-upon way of tracking productivity gains, a glaring omission given the billions of dollars being invested in AI.
According to Bain & Company, companies will need to fully commit to AI in order to realize the gains they’ve been promised.
“Fully commit” to see the light? That… sounds more like religion than like critical or even rational thinking.
It’s been clear that the best use of AI in a professional environment is as an assistant.
I don’t want something doing my job for me. I just want it to help me find something or to point out possible issues.
Of course, AI isn’t there yet. It doesn’t like reading through multiple large files. It doesn’t “learn” from you and what you’re doing, only what it’s “learned” before. It can’t pick up on your patterns over time. It doesn’t remember what your various responsibilities are. If I work in a file today, it’s not going to remember in a month when I work on it again.
And it might never get there. We’ve been rapidly approaching the limits of AI, with two major problems. First, scaling has hit diminishing returns: doubling the training data and computing resources won’t produce a model that’s twice as good. Second, overtraining is now a concern. We’re discovering that models can produce worse results if they receive too much training data.
And, obviously, it’s terrible for the environment and a waste of resources and electricity.