• PushButton@lemmy.world · 2 days ago

    And yet everybody is selling it as a tool to write code.

    The last time I checked, coding required logic.

    • jj4211@lemmy.world · 2 days ago

      To be fair, a decent chunk of coding is stupid boilerplate/minutia that varies environment to environment, language to language, library to library.

      So an LLM can do some code completion: filling out boilerplate that is blatantly obvious, generating the redundant text mandated by certain patterns, and keeping straight details between languages, like “does this language want join as a method on a list with a string argument, or vice versa?”

      Problem is, this can sometimes be more annoying than it’s worth, since miscompletions are distracting.

      • PushButton@lemmy.world · 2 days ago

        Fair point.

        I liked the “upgraded autocompletion”, you know, a completion based on the context, right up until they pushed it too far with 20 lines of nonsense…

        Now I am thinking of a way of doing the thing, and then I receive a 20-line suggestion.

        So I check whether that makes sense, losing my momentum, only to realize the suggestion is calling shit that doesn’t exist…

        Screw that.

        • merdaverse@lemm.ee · 2 days ago

          The amount of garbage it spits out in autocomplete is distracting. If it’s constantly making me 5-10% less productive the many times it’s wrong, it needs to save me a lot of time when it is right, and generally, I haven’t found that it does.

          Yesterday I tried to prompt it to change around 20 call sites for a function whose signature I had changed. Easy, boring, and repetitive, something that a junior could easily do. And all the models were absolutely clueless about it (using Copilot).
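[Editorial sketch: the function and parameter names below are invented for illustration; the original task's code is not shown in the thread. This is the kind of purely mechanical, repeat-the-same-edit refactor the comment describes.]

```python
# Hypothetical example: the signature gains a parameter, so every
# call site needs the same obvious, repetitive edit.

# Before: def load_config(path): ...
# After:
def load_config(path, strict=True):
    # Toy body; only the call-site shape matters for the example.
    return {"path": path, "strict": strict}

# Each of the ~20 call sites changes in exactly the same way:
cfg = load_config("app.yaml", strict=False)
print(cfg)
```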

      • lambalicious@lemmy.sdf.org · 2 days ago

        a decent chunk of coding is stupid boilerplate/minutia that varies

        …according to a logic, which means LLMs are bad at it.

        • jj4211@lemmy.world · 2 days ago

          I’d say that those details that vary tend not to vary within a language and ecosystem, so a fairly dumb correlative relationship is generally enough to be fine. There’s no way to use logic to infer that in language X you need to do mylist.join(string) but in language Y you need to do string.join(mylist), but it’s super easy to recognize tokens that suggest those things and correlate them to the vocabulary that matches the context.

          Rinse and repeat for things like: do I need to specify a type, and what is the vocabulary for the best type for a numeric value? This variable that makes sense is missing a declaration; does it look like a genuinely new, distinct variable, or just a typo of one that was already declared?

          But again, I’m thinking mostly of what sort of can work; my personal experience is that it’s wrong so often as to be annoying and get in the way of more traditional completion behaviors that play it safe, though those offer less help, particularly for languages like Python or JavaScript.
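[Editorial sketch of the join example from this comment: the Python side is real standard-library behavior; the JavaScript counterpart appears only as a comment.]

```python
# Python hangs join off the separator string:
words = ["alpha", "beta", "gamma"]
print("-".join(words))  # alpha-beta-gamma

# JavaScript flips it: words.join("-")
# Nothing logical forces either choice; a model just has to have seen
# enough of each language's idiom to emit the right shape in context.
```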

    • Schadrach@lemmy.sdf.org · 2 days ago

      A lot of writing code is relatively standard patterns and variations on them. For all but the really interesting parts, you could probably write a sufficiently detailed description and get an LLM to produce functional code that does the thing.

      Basically, for a bunch of common structures and use cases, the logic already exists, is well known, and has been replicated by enough people in enough places in enough languages that an LLM can reproduce it well enough, like literally anyone else who has ever written anything in that language.