• The Octonaut@mander.xyz · 4 months ago

      They don’t need to scrape Lemmy. They just need a federated instance, and then everything you post is delivered to them automatically; that’s how Lemmy is designed.

      Please understand literally nothing on Lemmy is private.
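
      To make that concrete: federation is push-based. Here’s a rough Python sketch of the kind of ActivityPub delivery a peer instance receives whenever you post (the payload shape follows the W3C ActivityStreams spec; the actor, peer URL, and the bare `requests.post` are illustrative, and a real server would also attach HTTP signatures):

      ```python
      import json
      import requests  # illustrative; real servers also sign these requests

      # Shape of an ActivityStreams "Create" activity (per the W3C spec).
      # Every federated peer subscribed to the community receives a copy.
      activity = {
          "@context": "https://www.w3.org/ns/activitystreams",
          "type": "Create",
          "actor": "https://example-instance.xyz/u/alice",  # hypothetical user
          "to": ["https://www.w3.org/ns/activitystreams#Public"],
          "object": {"type": "Note", "content": "Everything you post is public."},
      }

      # Push-based delivery: the posting server POSTs to each peer's inbox.
      # A scraper doesn't have to crawl anything; it just runs an instance
      # and collects what arrives.
      for inbox in ["https://peer-instance.example/inbox"]:  # hypothetical peer
          requests.post(
              inbox,
              data=json.dumps(activity),
              headers={"Content-Type": "application/activity+json"},
          )
      ```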

        • borari@lemmy.dbzer0.com (edited) · 4 months ago

          Put a public PGP key in your profile bio; then you can actually send truly end-to-end encrypted messages over insecure public channels (see the sketch after this comment).

          A very similar conversation led to a joke chain of PGP-encrypted replies between me and some other rando on Reddit a few years ago. We were both banned.
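
          A minimal sketch of that workflow, assuming the python-gnupg wrapper around a locally installed gpg binary (the key file name and message are hypothetical):

          ```python
          import gnupg  # pip install python-gnupg; wraps the local gpg binary

          gpg = gnupg.GPG()

          # Import the public key copied out of the recipient's profile bio.
          with open("recipient_pubkey.asc") as f:  # hypothetical file name
              result = gpg.import_keys(f.read())

          # Encrypt to that key; only the matching private key can decrypt,
          # so the ciphertext is safe to post in a public comment thread.
          encrypted = gpg.encrypt(
              "meet me in the comments",  # hypothetical message
              result.fingerprints[0],
              always_trust=True,          # skip the web-of-trust check
          )
          assert encrypted.ok, encrypted.status
          print(str(encrypted))  # ASCII-armored PGP message, ready to paste
          ```

          Anyone can encrypt to your public key, but only you can decrypt, which is exactly why it doesn’t matter that the channel itself is public.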

    • thetreesaysbark@sh.itjust.works (edited) · 4 months ago

      I’d say it’s not the LLM at fault. The LLM is essentially an innocent. It’s the same as a four-year-old being told that if they clap hard enough they’ll make thunder. It’s not the kid’s fault that they’re being fed bad information.

      The parents (companies) should be more responsible about what they tell their kids (LLMs).

      Edit: Disregard this, though, if I’ve completely misunderstood your comment.

      • Carighan Maconar@lemmy.world · 4 months ago

        Yeah, that’s my point too. Companies employing AI should be held responsible for the stuff their AIs say. See how much they like their AI hype when they’re on the hook for it!

      • xmunk@sh.itjust.works · 4 months ago

        I mean - I don’t think anyone’s solution to this issue would be to put an AI on trial… but it’d be extremely reasonable to hold Google responsible for any potential damages from this, and I think it’d also be reasonable to go after the organization that trained this model if they marketed it as an end-user-ready LLM.

      • thehatfox@lemmy.world · 4 months ago

        I’d say it’s more that parents (companies) should be more responsible about what they tell their kids (customers).

        Because right now the companies have a new toy (AI) that they keep telling their customers can make thunder from clapping. But in reality the claps sometimes make thunder but are also likely to make farts. Occasionally some incredibly noxious ones too.

        The toy might one day make earth-rumbling thunder reliably, but right now it can’t get close and saying otherwise is what’s irresponsible.

      • xantoxis@lemmy.world · 4 months ago

        Sorry, I didn’t know we might be hurting the LLM’s feelings.

        Seriously, why be an apologist for the software? There’s no effective difference between blaming the technology and blaming the companies who are using it uncritically. I could just as easily be an apologist for the company: not their fault they’re using software they were told would produce accurate information out of nonsense on the Internet.

        Neither the tech nor the corps deploying it are blameless here. I’m well aware that an algorithm only does exactly what it’s told to do, but the people who made it are also lying to us about it.

        • barsoap@lemm.ee (edited) · 4 months ago

          > Sorry, I didn’t know we might be hurting the LLM’s feelings.

          You’re not going to. CS folks like to anthropomorphise computers and programs; that doesn’t mean we think they have feelings.

          And we’re not the only profession doing that, though it might be more obvious in our case. A civil engineer, when a bridge collapses, is also prone to asking “is the cable at fault, or the anchor?” without ascribing feelings to anything. What it is, though, is ascribing a sort of animist agency, which comes naturally to many people when wrapping their heads around complex systems full of different things, well, doing things.

          The LLM is, indeed, not at fault. The LLM is a braindead cable anchor that some idiot, probably a suit, put in a place where it’s bound to fail.

    • gregorum@lemm.ee (edited) · 4 months ago

      also suggesting that those beans could have been found in a nebula, dick

      • jonne@infosec.pub · 4 months ago

        You don’t have time to be depressed when you’re trying to fix xorg.conf. (Yeah, I know, super dated reference; Linux is actually so good these days I can’t find an equivalent joke.)

        • BeigeAgenda@lemmy.ca · 4 months ago

          Editing grub.cfg from an emergency console, or running update-grub from a chroot, is a close second.

          Adding the right Modeline to xorg.conf seemed more like magic when it worked. 🧙🏼

        • KISSmyOSFeddit@lemmy.world · 4 months ago

          I’ve seen things you people wouldn’t believe… CRTs on fire from an xorg.conf typo… I configured display servers in the dark cause the screen was black… All those moments will be lost in time, like tears in rain… Time to switch to Wayland.

          • ikidd@lemmy.world (edited) · 4 months ago

            You haven’t lived until you’ve compiled a 3Com driver in order to get Token Ring connectivity so you can download the latest kernel source that has that new Ethernet thing in it.

  • thezeesystem@lemmy.world · 4 months ago

    Idk, it seems more helpful than the suicide hotline number. I’ve called them many times only to be told the same generic information, and they often hung up on me if I started to cry.

  • lolola@lemmy.blahaj.zone · 4 months ago

    The thing these AI goons need to realize is that we don’t need a robot that can magically summarize everything it reads. We need a robot that can magically read everything, sort out the garbage, and summarize the good parts.

    • CanadaPlus@lemmy.sdf.org · 4 months ago

      Yeah, when Google starts trying to manipulate the meaning of results in its favour, instead of just the traffic, things will be at a whole other level of scary.