I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.
Especially in coding?
Actually, that’s where they are the least suited. Companies will spend more money on cleaning up bad code bases (not least from a security point of view) than is gained from “vibe coding”.
Audio, art, and anything else that doesn't need "bit perfect" output is another matter, though.
There's also the issue of people now flooding the internet with AI-generated tutorials and documentation, making things even harder. I managed to botch the Linux install on my Raspberry Pi so badly I couldn't easily fix it, all thanks to a crappy AI-generated tutorial on adding to PATH that I didn't immediately spot.
With art, it can't really be controlled enough to be useful for much beyond a spam machine, but spammers only care about social media clout and/or ad revenue.
And also chatbot-generated bug reports (like the ones curl has been getting) and entire AI-generated open source projects (I guess for some stupid crypto scheme).
But now the "idea guy" can vibecode. This shit destroys the separation between management and the codebase, making it the perfect anti-productivity tool.