We’re all seeing the breathless hype surrounding “AI”, the vacuous marketing term of the moment. It’ll change everything! It’s coming for our jobs! Some 50% of white-collar workers will be laid off!

Setting aside “and how will it do that?” as outside the scope of the topic at hand, it’s a bit baffling to me how a nebulous concept prone to outright errors is an existential threat. (To be clear, I think the energy and water impacts are.)

I was having a conversation on Reddit along these lines a couple of days ago, and after seeing more news that just parrots Altman’s theme-du-jour, I need a sanity check.

Something I’ve always found hilarious at work is someone asking if you have a calculator (I guess that dates me to the flip-phone era) … my canned response was “what’s wrong with the very large one on your desk?”

Like, automation is literally why we have these machines.

And it’s worth noting that you can’t automate the interesting parts of a job, as those are creative. All you can tackle is the rote, the tedious, the structured bullshit that no one wants to do in the first place.

But here’s the thing: I’ve learned over the decades that employers don’t want more efficiency. They shout it out to the shareholders, but when it comes down to the fiefdoms of directors and managers, they like inefficiency, thank you very much, as it provides tangible work for them.

“If things are running smoothly, why are we so top-heavy?” is not something any manager wants to hear.

Whatever the fuck passes for “AI” in common parlance can’t threaten management in the same way as someone deeply familiar with the process and able to code. So it’s anodyne … not a threat to the structure. Instead of doubling efficiency via bespoke code (leading to a surplus of managers), just let a couple people go through attrition or layoffs and point to how this new tech is shifting your department’s paradigm.

Without a clutch.

I’ve never had a coding title, but I did start out in CS (why does this feel like a Holiday Inn Express ad?). So regardless of industry, when I’m expected to use an inefficient process, my first thought is to fix it. And it has floored me how severe the pushback is.

I reduced a team of 10 auditors to five at an audiobook company with a week of coding in VB. At a newspaper hub, a team of three placing ads went down to 0.75 (two of the three being me and my girlfriend).

Same hub: I clawed back 25% of my team’s production time after absurd reporting requirements were implemented, despite all the timestamps already being in our CMS. The vendor charged extra to access our own data, so management decided a better idea than paying the vendor six figures was to overstaff by 33% (250 total at the center) to get that sweet, sweet self-reported, error-laden data!

At a trucking firm, I solved a decade-long problem with how labour-intensive receiving for trade shows was. Instead of asking the client for their internal data, which had been my boss’s approach, I asked how much they really needed from us and whether I could simplify the forms and reports (samples provided). Instant yes. But my boss hated the new setup: I was using Microsoft Forms to feed Excel, plus a 10-line script to generate receivers and reports, and she didn’t understand any of that, so how could she be sure I knew what I was doing?
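For what it’s worth, the “script” layer in a setup like that really is trivially small. A sketch of the idea, with invented field names (the post doesn’t give the real ones), Python standing in for whatever the original used, and CSV text standing in for the Excel feed:

```python
# Hypothetical sketch of a forms -> spreadsheet -> receiver pipeline.
# Column names (booth, pieces, weight_lbs, carrier) are made up for
# illustration; the real script's details aren't in the post.
import csv
import io

def build_receivers(form_csv: str) -> list[str]:
    """Turn exported form responses into one receiving line per shipment."""
    receivers = []
    for row in csv.DictReader(io.StringIO(form_csv)):
        receivers.append(
            f"RECEIVER {row['booth']}: {row['pieces']} pcs, "
            f"{row['weight_lbs']} lbs via {row['carrier']}"
        )
    return receivers

sample = """booth,pieces,weight_lbs,carrier
101,4,250,ACME Freight
102,12,1100,Roadway
"""

for line in build_receivers(sample):
    print(line)
```

The whole trick is that once the form forces clean, structured input, the report is just a loop over rows — which is exactly the kind of thing that’s illegible to a manager who has only ever seen the manual process.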

You can’t make this shit up.

Anyway, I think I’ve run far afield of my central thesis, but these illustrations point to a certain intransigence at the management level that will matter far more than the coverage suggests.

These folks locked in their 2.9% mortgage and don’t want to rock the boat.

My point is, why would management suddenly be keen on making themselves redundant when decades of data tell us otherwise?

This form of “AI” does not subvert the dominant paradigm. And no boss wants fewer employees.

As such, who’s actually going to get screwed here? The answer may surprise you.

  • Powderhorn@beehaw.org (OP) · 12 hours ago

    I’m not saying anything of the sort. You’re reading shit you wanted to into this.

    OK, as you have all the answers: What is making people fearful?

    • teawrecks@sopuli.xyz · 11 hours ago · edited

      Hold up, what did I read into it? I directly quoted you and asked for clarification on whether you currently believe that is the state of AI, or whether you’re saying that’s what automation used to be.

      If you’re saying that’s what automation used to be, then we agree. But if you believe that modern AI can only do the “tedious bullshit no one wants to do”, that’s literally not the case.

      Sora 2 is generating realistic video of anything you want given just a text prompt, rivaling the best VFX artists.

      Hollywood is currently clamoring to “work with” AI celebrities who don’t exist, with a synthetic voice, singing songs no one composed with lyrics generated by an LLM. Why give a cut to a pop artist or band if you can synthesize it from nothing?

      The education system has been completely upturned because every assignment can be completed by an AI, and there’s no way for the teacher to detect it. And it’s having a measurably damaging effect on students’ intellect.

      A popular quote floating around right now is, “I want AI to do my laundry and dishes so that I can do art and writing, not for AI to do my art and writing so that I can do my laundry and dishes.”

      And right now I literally can’t know if someone is running an AI with the prompt: “respond to this comment as though you are an out of touch older American who still thinks the capabilities of generative AI are limited to simple automation of tedious tasks no one wants to do anyway.” And you don’t know if I’m an AI with the prompt, “respond to this comment like a condescending tech literate young adult who is afraid of the impact that generative AI owned and funded by an oligarchy is going to have on every aspect of their future.”

      I honestly feel stupid even bothering to type any of this out. I’m surely being had.

      • Powderhorn@beehaw.org (OP) · 11 hours ago

        If you truly believe LLMs are transformative, that’s on you. I’d perhaps not believe people going forward, as they tend to lie.

        • Crotaro@beehaw.org · 2 hours ago · edited

          I would love for it to be different, but they’re mostly right.

          The hardcore shareholders, who probably hold shares in more than one company and surely see these companies only for their monetary value, would not care if the company’s creative work occasionally featured AI giveaways like twelve-fingered people and inconsistent storylines, if it meant they could save on all their artists’ salaries by paying for a single AI subscription.

          Yes, you can (mostly) still tell when something’s made by AI, but the fact is that we already see creatives being replaced by it, leaving them free to do dishes and laundry instead of the other way around. The Coca-Cola AI ads are one prominent example. Executives and shareholders don’t care about their product being inferior if it saves them even 20% in expenses. And we both know that replacing your entire creative team (often just one or two people) with AI is a bigger saving on “creative expenses” than 20%. We know that because we can literally look up salaries versus subscription prices for tools like Sora and Veo 3.

          Yet, contrary to what I perceive as your main argument here, we don’t see widespread adoption of AI across all kinds of companies to do the tedious labor. That work still seems to be done by traditional means, because LLMs and generative AI are just not good at repairing a leaky toilet, checking for damage in a factory, welding, or even just pushing a button to announce break time.

          Edit: spellings