Okay, let me give a little context. I’ll turn 40 in a couple of months, and I’ve been a C++ software developer for more than 18 years. I enjoy coding, and I enjoy writing “good” code, readable and so on.

However, for the past few months I’ve become really afraid for the future of this job I love, given the progress of artificial intelligence. Very often I can’t sleep at night because of it.

I fear that my job, while not disappearing completely, will turn into a very boring one that consists of debugging automatically generated code, or that it will disappear altogether.

For now, I’m not using AI. A few of my colleagues do, but I don’t want to, for two reasons: one, it removes a part of coding that I like, and two, I have the feeling that using it is sawing off the branch I’m sitting on, if you see what I mean. I fear that in the near future, people who don’t use it will be fired because management sees them as less productive…

Am I the only one feeling this way? It seems like all tech people are enthusiastic about AI.

  • bruhduh@lemmy.world · 8 months ago

    Imagine it’s like having an intern under you who helps you with everything; the quality of the code will still be on you regardless.

  • z00s@lemmy.world · 8 months ago

    It won’t replace coders as such. There will be devs who use AI to help them be more productive, and there will be unemployed devs.

  • kent_eh@lemmy.ca · 8 months ago

    Have you seen the shit code it confidently spews out?

    I wouldn’t be too worried.

    • fievel@lemm.ee (OP) · 8 months ago

      Well, I have seen it. I even code-reviewed some without knowing; when I asked my colleague what had happened, he said, “I used ChatGPT. I’m not sure I understand exactly what this does, but it works.” I must confess that after the code review comments, not much was left of the original.
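
      For flavor, here’s a made-up stand-in for the sort of thing I mean (hypothetical, not the actual code I reviewed). It compiles and prints the right answer, which is why “it works” felt like enough, but the manual memory management is exactly what review comments strip out in favor of std::string:

      ```cpp
      #include <cctype>
      #include <cstring>
      #include <iostream>

      // Correct output, reviewable style: a raw owning pointer where
      // std::string would do the same job safely.
      char* to_upper_copy(const char* s) {
          std::size_t n = std::strlen(s);
          char* out = new char[n + 1];
          for (std::size_t i = 0; i < n; ++i)
              out[i] = static_cast<char>(std::toupper(static_cast<unsigned char>(s[i])));
          out[n] = '\0';
          return out;  // ownership silently passed to the caller
      }

      int main() {
          char* shouty = to_upper_copy("it works");
          std::cout << shouty << '\n';  // prints: IT WORKS
          delete[] shouty;              // easy to forget
      }
      ```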

      • Adalast@lemmy.world · 8 months ago

        If I am going to poke small holes in the argument, the exact same thing happens every day when coders google a problem and find a solution on Stack Exchange or the like and copy/paste it into the code without understanding what it does. Yes, it was written initially by someone who understood it, but the end result is the exact same. Code that was implemented without understanding the inner workings.
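
        A concrete, hypothetical illustration: the erase-remove idiom gets copy-pasted from Stack Overflow every day, works perfectly, and plenty of the people pasting it couldn’t say why std::remove alone doesn’t shrink the vector:

        ```cpp
        #include <algorithm>
        #include <iostream>
        #include <vector>

        int main() {
            std::vector<int> v{1, 2, 3, 2, 4, 2};

            // The snippet everyone copies: std::remove only shifts the kept
            // elements forward and returns the new logical end; it cannot
            // change the vector's size. erase() does the actual shrinking.
            v.erase(std::remove(v.begin(), v.end(), 2), v.end());

            for (int x : v) std::cout << x << ' ';  // prints: 1 3 4
            std::cout << '\n';
        }
        ```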

          • Adalast@lemmy.world · 8 months ago

            Really? I haven’t done the ChatGPT thing, but I know I have spent days searching for solutions to some of the more esoteric problems I run into. I can’t imagine that asking an AI and then debugging what it returns would be any more intensive, as long as the AI solution functioned enough to be a starting point.

            • knightly the Sneptaur@pawb.social · 8 months ago

              That’s the thing, how do you determine whether or not the “AI solution functions enough” without having a human review it?

              The economics aren’t there because LLM outputs aren’t trustworthy, and the kind of expertise you’d need to validate them is functionally equivalent to that which could be employed to write the code in the first place.

              “Generative AI” is an inefficient solution to a problem that’s already been solved by the existence of coding support forums like StackOverflow. Sure, it can be neat to ask it for example code or a bedtime story, but once the novelty wears off, all you’re left with is an expensive plagiarism machine that won’t even notice when it confidently lies to you.

  • TheMurphy@lemmy.world · 8 months ago

    You seem like the guy who kept writing by hand and didn’t want to use the typewriter, even though it was twice as fast.

    Or the guy who kept using the typewriter and didn’t want to switch to the computer.

    You see the point?

  • bloopernova@programming.dev · 8 months ago

    There’s a massive amount of hype right now, much like everything was blockchains for a while.

    AI/ML is not able to replace a programmer, especially not a senior engineer. Right now I’d advise you do your job well and hang tight for a couple of years to see how things shake out.

    (me = a ~50-year-old DevOps person)

    • DrQuint@lemmy.world · 8 months ago

      I’m only in my very first year of DevOps, and already I have five years’ worth of AI giving me hilarious, sad, and ruinous answers regarding the field.

      I needed proper knowledge of Ansible ONCE so far, and it managed to lie to me about Ansible TWICE. AI is many things, but an expert system it is not.

    • DonWito@lemmy.techtailors.net · 8 months ago

      Great advice. I would just add: learn to leverage those tools effectively. They’re a great productivity boost. Another side effect, once they become popular, is that some skills we already have will become harder to learn, so they might end up in higher demand.

      Anyway, make sure you put aside enough money to not have to worry about such things 😃

  • BolexForSoup@kbin.social · 8 months ago

    Thought about this some more so thought I’d add a second take to more directly address your concerns.

    As someone in the film industry, I am no stranger to technological change. Editing in particular has radically changed over the last 10 to 20 years. There are a lot of things I used to do manually that are now automated. Mostly what it’s done is lower the barrier to entry and speed up my job after a bit of pain learning new systems.

    We’ve had auto-coloring tools since before I began and colorists are still some of the highest paid folks around. That being said, expectations have also risen. Good and bad on that one.

    Point is, a lot of times these things tend to simplify/streamline lower level technical/tedious tasks and enable you to do more interesting things.

  • sosodev@lemmy.world · 8 months ago

    You’re certainly not the only software developer worried about this. Many people across many fields are losing sleep thinking that machine learning is coming for their jobs. Realistically automation is going to eliminate the need for a ton of labor in the coming decades and software is included in that.

    However, I am quite skeptical that neural nets are going to be reading and writing meaningful code at large scales in the near future. If they did we would have much bigger fish to fry because that’s the type of thing that could very well lead to the singularity.

    I think you should spend more time using AI programming tools. That would let you see how primitive they really are in their current state and learn how to leverage them for yourself. It’s reasonable to be concerned that employees will need to use these tools in the near future. That’s because these are new, useful tools and software developers are generally expected to use all tooling that improves their productivity.

    • kellyaster@kbin.social · 8 months ago

      I think you should spend more time using AI programming tools. That would let you see how primitive they really are in their current state and learn how to leverage them for yourself.

      I agree, sosodev. I think it would be wise to at least be aware of modern A.I.'s current capabilities and inadequacies, because honestly, you gotta know what you’re dealing with.

      If you ignore and avoid A.I. outright, every new iteration will come as a complete surprise, leaving you demoralized and feeling like shit. More importantly, there will be less time for you to adapt because you’ve been ignoring it when you could’ve been observing and planning. A.I. currently does not have that advantage, OP. You do.

    • mozz@mbin.grits.dev · 8 months ago

      If they did we would have much bigger fish to fry because that’s the type of thing that could very well lead to the singularity.

      Bingo

      I won’t say it won’t happen soon, and it seems fairly likely to happen at some point. But by that point, so much of the world will have changed because of AI’s other impacts, as it developed the ability to automate thousands of things easier than programming, that “will I still have my programming job” may well not be the most pressing issue.

      For the short term, the primary concern is programmers who can work much faster with AI replacing those that can’t. SOCIAL DARWINISM FIGHT LET’S GO

  • mozz@mbin.grits.dev · 8 months ago

    I think all jobs that are pure mental labor are under threat to a certain extent from AI.

    It’s not really certain when real AGI is going to start to become real, but it certainly seems possible that it’ll be real soon, and if you can pay $20/month to replace a six-figure software developer, then yes, a lot of people are in trouble. Like a lot of other revolutions like this that have happened, not all of it will be “AI replaces engineer”; some of it will be “engineer who can work with the AI and complement it to be productive will replace engineer who can’t.”

    Of course that’s cold comfort once it reaches the point that AI can do it all. If it makes you feel any better, real engineering is much more difficult than a lot of other pure-mental-labor jobs. It’ll probably be one of the last to fall, after marketing, accounting, law, business strategy, and a ton of other white-collar jobs. The world will change a lot. Again, I’m not saying this will happen real soon. But it certainly could.

    I think we’re right up against the cold reality that a lot of the systems that currently run the world don’t really care if people are taken care of and have what they need in order to live. A lot of people who aren’t blessed with education and the right setup in life have been struggling really badly for quite a long time no matter how hard they work. People like you and me who made it well into adulthood just being able to go to work and that be enough to be okay are, relatively speaking, lucky in the modern world.

    I would say you’re right to be concerned about this stuff. I think starting to agitate for a better, more just world for all concerned is probably the best thing you can do about it. Trying to hold back the tide of change that’s coming doesn’t seem real doable without that part changing.

    • taladar@sh.itjust.works · 8 months ago

      It’s not really certain when real AGI is going to start to become real, but it certainly seems possible that it’ll be real soon

      What makes you say that? The entire field of AI has not made any progress towards AGI since its inception and if anything the pretty bad results from language models today seem to suggest that it is a long way off.

      • mozz@mbin.grits.dev · 8 months ago

        You would describe “recognizing handwritten digits some of the time” -> “GPT-4 and Midjourney” as no progress in the direction of AGI?

        It hasn’t reached AGI or any reasonable facsimile yet, no. But up until a few years ago something like ChatGPT seemed completely impossible, and then a few big key breakthroughs happened, and now the impossible is possible. It seems by no means out of the question that a few more big breakthroughs could happen with AGI, especially with as much attention and effort is going into the field now.

        • jacksilver@lemmy.world · 8 months ago

          It’s not that machine learning isn’t making progress, it’s just many people speculate that AGI will require a different way of looking at AI. Deep Learning, while powerful, doesn’t seem like it can be adapted to something that would resemble AGI.

          • mozz@mbin.grits.dev · 8 months ago

            You mean, it would take some sort of breakthrough?

            (For what it’s worth, my guess about how it works generally agrees with yours in terms of real sentience; it’s just that I think (a) neither one of us really knows that for sure, and (b) AGI doesn’t require sentience; a sufficiently capable fakery, even one with limitations, can still upend the world quite a bit.)

            • jacksilver@lemmy.world · 8 months ago

              Yes, and most likely more of a paradigm shift. Deep learning models are largely static statistical models. The main issue isn’t the statistical side but the static nature: for AGI this is a significant hurdle, because as the world evolves, or as these models simply run into new circumstances, they will fail.

              It’s largely the reason why autonomous vehicles have sort of hit a standstill. It’s the last 1% (what if an intersection is out, what if the road is poorly maintained, etc.) that’s so hard for these models, as it requires “thought” and not just input/output.

              LLMs have shown that large quantities of data seem to approach some sort of generalized knowledge, but researchers don’t necessarily agree on that (https://arxiv.org/abs/2206.07682). So if we can’t get more emergent abilities, it’s unlikely AGI is on the way. But as you said, combining and interweaving these systems may get something close.

  • infinitepcg@lemmy.world · 8 months ago

    Nobody knows if and when programming will be automated in a meaningful way. But once we have the tech to do it, we can automate pretty much all work. So I think this will not be a problem for programmers until it’s a problem for everyone.

  • howrar@lemmy.ca · 8 months ago

    If your job truly is in danger, then not touching AI tools isn’t going to change that. The best thing you can do for yourself is to explore what these tools can do for you and figure out whether they can help you become more productive, so that you’re not first on the chopping block. Maybe in doing so, you’ll find other aspects of programming that you enjoy just as much and that don’t get automated away by these tools. Or maybe you’ll find that they’re not all they’re hyped up to be, which will ease your worry.

  • MajorHavoc@programming.dev · 8 months ago

    I’m both unenthusiastic about A.I. and unafraid of it.

    Programming is a lot more than writing code. A programmer needs to set up a reliable deployment pipeline, or write a secure web-facing interface, or make a usable and accessible user interface, or correctly configure logging, or identity and access, or a million other nuanced, pain-in-the-ass tasks. I’ve heard some programmers can occasionally decrypt what the hell the client actually wanted, but I think that’s a myth.

    The history of automation goes: somebody finds a shortcut; we all embrace it; we all discover it doesn’t really work; someone works their ass off on a real solution; we all pay a premium for it; a bunch of us collaborate on an open shared solution; we all migrate and focus on one of the 10,000 other remaining pain-in-the-ass challenges.

    A.I. will get better, but it isn’t going to be a serious, viable replacement for any of the real work in programming for a very long time. Once it is, Murphy’s law and history teach us that there’ll be plenty of problems it still sucks at.

  • Kbin_space_program@kbin.social · 8 months ago

    As an example:

    Salesforce has been trying to replace developers with “easy to use tools” for a decade now.

    They’re no closer than when they started. Yes, the new, improved Flow Builder and OmniStudio look great at first in the simple little preplanned demos they give, but they’re very slow, unsafe to use, and generally impossible to debug.

    As an example, a common use case: a sales guy wants to create an opportunity with a product. They show off how OmniStudio lets an admin create a set of independently loading pages that let them:
    • create the opportunity record, associating it with an existing account number.
    • add a selection of products to it.

    But what if the account number doesn’t exist? It fails. It can’t create the account for you, nor prompt you to do it in a modal. The opportunity page only works with the opportunity object.

    Also, if the user tries to go back, it doesn’t allow them to delete products already added to the opportunity.
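
    The frustrating part is how small the missing piece is in real code. Here’s a hypothetical C++ sketch (my own names, nothing to do with Salesforce’s actual APIs) of the fallback any developer would add, and the point-and-click flow can’t express:

    ```cpp
    #include <iostream>
    #include <map>
    #include <optional>
    #include <string>

    struct Account { std::string number; };
    struct Opportunity { std::string accountNumber; };

    std::map<std::string, Account> accounts;  // stand-in for the database

    std::optional<Account> findAccount(const std::string& number) {
        auto it = accounts.find(number);
        if (it == accounts.end()) return std::nullopt;
        return it->second;
    }

    Account createAccount(const std::string& number) {
        return accounts[number] = Account{number};
    }

    // The fallback: create the missing account instead of failing the flow.
    Opportunity createOpportunityFor(const std::string& accountNumber) {
        std::optional<Account> existing = findAccount(accountNumber);
        Account account = existing ? *existing : createAccount(accountNumber);
        return Opportunity{account.number};
    }

    int main() {
        Opportunity opp = createOpportunityFor("ACME-042");  // account didn't exist yet
        std::cout << "Opportunity created for account " << opp.accountNumber << '\n';
    }
    ```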

    Once we get actual AIs that can handle context and planning, then our field will be in danger. But as long as we’re going down the glorified-chatbot route, it isn’t.

  • arthur@lemmy.zip · 8 months ago

    Man, it’s a tool. It will change things for us, and it’s very powerful, but it’s still a tool. It doesn’t “know” anything; there’s no true intelligence in the things we now call “AI”. For now, it’s really useful as a rubber duck: it can make interesting suggestions, help you explore big code bases faster, and even be useful for creating boilerplate. But the code it generates usually isn’t very trustworthy and is of lower quality.

    The reality is not that we will lose our jobs to it, but that companies will expect more productivity from us using these tools. I recommend you try ChatGPT (the best in class for now) and try to understand its strengths and limitations.

    Remember: this is just autocomplete on steroids that can do more than the regular version, but makes the same kinds of errors.