cross-posted from: https://lemmy.zip/post/49954591

“No Duh,” say senior developers everywhere.

The article explains that vibe code is often close to functional, but not quite there, requiring developers to go in and find where the problems are - resulting in a net slowdown of development rather than productivity gains.

Then there’s the issue that there is no agreed-upon way of tracking productivity gains - a glaring omission given the billions of dollars being invested in AI.

According to Bain & Company, companies will need to fully commit themselves to realize the gains they’ve been promised.

“Fully commit” to see the light? That… sounds more like a kind of religion, not like critical or even rational thinking.

  • dumples@midwest.social
    +3 · 6 hours ago

    I always wondered how they got those original productivity claims. I assume they count every time a programmer accepts an AI suggestion. That seems like the way to get the highest marketable number for a sales team. I know that when I use those suggestions, occasionally they’re 100% correct and I don’t have to make any changes. More often than not it starts out correct, and then as it fills in it adds things I don’t need, or that are wrong, or that don’t fit how I like to write my code. Then I have to delete it and recreate it.

    The most annoying part is when I think I’m tabbing for autocomplete and it just adds more code that I don’t need.

  • super_user_do@feddit.it
    +1 · 6 hours ago

    I asked ChatGPT for a PHP function to parse text from a file. It gave me 200 lines of weird ahh code.

    Then I looked at the actual documentation and it took me 2 seconds, because someone in the comments gave me a better response.
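
    For reference, the documented approach really is that short. Here’s a minimal sketch, assuming the task was just pulling lines out of a text file (the file name is a placeholder):

        <?php
        // Built-in calls from the PHP manual: read the whole file as one
        // string, or as an array with one entry per line.
        $text  = file_get_contents('input.txt');
        $lines = file('input.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);

        foreach ($lines as $line) {
            // "Parsing" in the simple case: split each line on a delimiter.
            $fields = explode(',', $line);
            // ... do something with $fields ...
        }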

  • flatbield@beehaw.org
    +22 · 14 hours ago

    I have a friend who is a professional programmer. They think AI will generate lots of work fixing the shit code it creates. I guess we will see.

    • tohuwabohu@programming.dev
      +1 · 5 hours ago

      Yes it will, while at the same time augmenting experienced developers who know what they’re doing. I evaluated Claude Code for a month. Does it help build simple, well-defined tasks faster? Yes. Do I imagine it working well in a large-scale project maintained by multiple teams? Absolutely not.

      • blarghly@lemmy.world
        +1 · 8 hours ago

        Idk, that was basically 90% of my last job. At least the AI code will be nicely formatted and use variable names longer than a single character.

        • Knock_Knock_Lemmy_In@lemmy.world
          +1 · 6 hours ago

          Oh yes. Get the AI to refactor and make it pretty.

          But I’ve just spent 3 days staring at something that was missing a ().
          However, I admit that a human could have easily made the same error.

    • thingsiplay@beehaw.org
      +8 · 12 hours ago

      I actually think a new field of “real” programmers will emerge, specialized in hunting down AI problems. So companies that used AI to get rid of programmers will start hiring programmers to get rid of AI problems.

      • blarghly@lemmy.world
        +2 · 8 hours ago

        I mean, more realistically… AI can’t really write code reliably, but if utilized appropriately it can write code faster than a developer on their own. In this way it’s similar to every other kind of tooling we’ve created. And what we’ve seen in the past is that when developers get better tooling, the amount of available software work increases rather than decreases. Why? Because when it takes fewer developer hours to bring a product to market, it lowers the barrier to entry for trying to create a new product. It used to be that custom software was only written for large, rich institutions that would benefit from economies of scale. Now every beat-up taco truck has its own website.

        And then, once all these products are brought to market, that code needs maintenance. Upgrades. New features. Bug fixes. Etc.

  • majster@lemmy.zip
    +9 / −4 · 11 hours ago

    I’m a dev at a tech startup. Most devs at the company are pretty impressed by Claude Code and find it very useful. Hence the company has a pretty hefty budget allocated for it.

    What I need to do is think through the problem at hand, and Claude will do the code->build->unit test cycles until it satisfies the objective. In the meantime I can drink coffee in peace and go to the bathroom.

    To me and to many of my coworkers it’s a completely new work paradigm.

    • nic2555@lemmy.world
      +8 · 9 hours ago

      Maybe I should try it to understand, but to me this feels like it would produce code that doesn’t follow the company standards, code that will be harder to debug since the dev has little to no idea how it works, and code that is overall of lower quality than code produced by a dev who doesn’t use AI.

      And I would not trust those unit tests: how can you be sure they test the correct thing if you never made them fail in the first place? A unit test that passes right away is not a test you should rely on (a quick sketch of what I mean is below).

      Don’t take it the wrong way, but if Claude writes all of your code, doesn’t that make you more of a product owner than a dev?
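
      To illustrate the fail-first point, here’s a hypothetical PHPUnit sketch (slugify() is a made-up helper, not anything generated by Claude). Before trusting a test like this, I’d temporarily break the function or the expected string, confirm the test actually goes red, and only then put it back.

          <?php
          use PHPUnit\Framework\TestCase;

          // Made-up helper standing in for whatever code the AI produced.
          function slugify(string $s): string
          {
              return strtolower(str_replace(' ', '-', trim($s)));
          }

          // The value of this test comes from having watched it fail once:
          // a test that has never failed tells you nothing about what it
          // actually checks.
          final class SlugifyTest extends TestCase
          {
              public function testReplacesSpacesWithDashes(): void
              {
                  $this->assertSame('hello-world', slugify('Hello World'));
              }
          }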

      • majster@lemmy.zip
        +2 / −1 · 6 hours ago

        Sorry, my comment misled you; it’s not such a hands-off experience that you transform from dev to PM. It’s more like a smart code monkey that helps you. I absolutely have to review almost all of the code, but I’m spared the typing.

        • nic2555@lemmy.world
          +2 · 6 hours ago

          Thank you for clarifying. That aligns more with the way I would use an LLM in my day-to-day work, which is quite reassuring.

          Even if it doesn’t work for me, I can still see the advantage of using an AI assistant in that context. In the end, as long as you are doing the work required, the tools you use don’t really matter!

  • abbadon420@sh.itjust.works
    +33 · 17 hours ago

    They say the same about Scrum.

    “It doesn’t work in your company because you haven’t fully implemented all aspects of Scrum.”

    Coincidentally, it costs about a gazillion dollars to become fully Scrum certified.

    • Knock_Knock_Lemmy_In@lemmy.world
      +3 · 8 hours ago

      Scrum works because of two things:

      • Projects get simplified or abandoned much quicker

      • Tasks are assigned to the team, not the individual

      Everything else is entirely optional.

  • thingsiplay@beehaw.org
    +19 / −1 · 15 hours ago

    Billions of dollars are spent, unimaginable amounts of power are used, tons of programmers are fired, millions upon millions of lines of code are copied without license or credit, and nasty bugs and security issues are added because people trust the AI system or get lazy. Was it worth it? Many programmers become disposable as they have to use AI. That means “all” programmers are the same and differ only in what model they use - at least that’s the future if everyone is using AI from now on.

    AI = productivity increases, quality decreases… oh wait, AI = productivity seems to increase, quality does decrease.

    • dinckelman@programming.dev
      +9 · 11 hours ago

      This is just a very fucked reminder that easy success never comes without a cost. Unfortunately, normal people paid that debt, while business majors continue feeding the pump-and-dump machine.

        • blarghly@lemmy.world
          +2 · 7 hours ago

          That’s literally not true at all. Developed nations enjoy unprecedented levels of wealth these days, while incomes have consistently been rising in developing nations for decades. If it were true, then for every person we have shitposting on Lemmy right now, we would need someone else living on less sustenance than our Paleolithic ancestors did. We can certainly argue about the overall distribution of the wealth that has been generated - but it is blatantly obvious that higher standards of living do not imply that someone, somewhere else must be living in poverty.

  • Not a newt@piefed.ca
    +30 · 17 hours ago

    “Fully commit” to see the light? That… sounds more like a kind of religion, not like critical or even rational thinking.

    It also gives these shovel peddlers an excuse: “Oh, you’re not seeing gains? Are you even lifting, er, AI-ing, bro? You probably have some employees not using enough AI; you have to blame them instead of us.”

        • AnyOldName3@lemmy.world
          +3 · 11 hours ago

          It depends on the sense of “wet” that you’re using. Most of the time, the relevant kind of wet is how much water something contains, and water achieves peak theoretical wetness by that definition. It’s only in specific circumstances that the “surface is coated evenly by a wetting agent” definition is relevant, like painting or firefighting.

            • AnyOldName3@lemmy.world
              +2 · 10 hours ago

              I know people who say exactly this kind of thing entirely seriously (potentially because they first saw it as an unlabelled joke that they took too seriously). Sometimes people are just incorrect pedants, smugly finding fault with things that aren’t even wrong.

  • Ŝan@piefed.zip
    +5 / −9 · 14 hours ago

    LLMs are no different than any other technology: when the people making the decision to bring in the tech aren’t the people doing the work, you get shit decisions. You get LLMs, or Low Code/No Code platforms, or cloud migrations. Technical people make mistakes too, but any decision made with the input of salespeople will be made on the glossiness of brochures and will be bad. Also, any technology decision made with the Gartner Magic Quadrant - fuck the GMQ. Any decision process using it smells bad.

    • thingsiplay@beehaw.org
      +2 · 12 hours ago

      Well, there is a key difference between AI and other technology: the ability to “think” and “decide” for itself. That’s the point of the tech. The problem is that people “think” that’s true.