• kahdbrixk@feddit.org
    3 hours ago

    Meanwhile, given the Trump administration’s anti-regulation stance on the matter, companies like OpenAI are unlikely to be strongly motivated to implement effective guardrails to keep their AIs in check.

Hey, don't worry, capitalism will save us all. The market will just regulate itself, because, I mean, who would want the end of the world? Of course all AI companies will implement "guardrails," out of their implicit wish to keep us and themselves safe.

And no tech bro with half a brain would ever experiment carelessly in pursuit of getting more famous and/or rich.

    Never.

  • Omega (she/her)@piefed.blahaj.zone
    13 hours ago

As weird as this sounds as a sales pitch, this is a sales pitch. This is just AI hype and nothing else. The only way AI is gonna kill us all is if we put it in charge of absolutely everything and it accidentally kills us all through its sheer incompetence, because AI is not AI; it's just a bunch of glorified algorithms strapped together with molten shit and duct tape.

    These tech bros are just high on their own fucking copium, imagining that fucking Skynet is about to happen, when they at best just built Wheatley. AI is not gonna kill us all. We’re quite fine on that front. However, these tech bros and capitalists making the decisions will.

Still, though, they might be right not to save for retirement if they keep doing this kind of shit, because… well, right now data centers for hallucination machines have prioritized access to water over people, it's taking away jobs left and right, it's scamming people… Let's just say I'm afraid (I hope) people may "lose patience" eventually. 🙃

    • Snot Flickerman@lemmy.blahaj.zone
      12 hours ago

Seriously. Even if an AI gets "nuclear codes," how the fuck is it going to jump into an air-gapped system with about a 60th of the CPU cycles and RAM it needs to operate? This doomerism is fucking unhinged. It literally requires pretending that AI can somehow just jump into any old computer, and that we totally don't have massive server farms just to make them function at all, with high-level computers supporting them with high-speed CPUs and massive amounts of RAM and storage. They don't have a fucking body, for one thing, and for two, no current iterations do anything at all like thinking; they all have to be prompted. They don't start conversations on their own.

      It’s so fucking stupid, it’s the fantasies of childish fucking nerds.

      • Jason2357@lemmy.ca
        12 hours ago

I've come to equate AI doomerism with cloaked AI boosting. It's just another way to convince investors that AI is as powerful as claimed. It's not.

      • kiagam@lemmy.world
        12 hours ago

I am not worried about the launch codes; I'm worried some incompetent middle manager will replace key staff at a water treatment plant or something (because the CEO told them to put AI somewhere or be fired). Then the fucking AI will have a meltdown after it reads data 10 times in a row and will poison the water supply of millions. Nobody will catch it, because quality-control analysis was also replaced by the same AI, and it decided to delete the alert system for some reason (it had to do something; that is how it works).

I'm not worried about AI being good; I'm worried about the people who think it is good (which already proves they are stupid af) giving it a shot at anything critical.

  • Harvey656@lemmy.world
    12 hours ago

God, please let AI do it efficiently. Fucking bullet for everyone's noggin and be done with it; way more humane than this shit.