I think the fact that the marketing hype around LLMs has exceeded their actual capability has led a lot of people to dismiss just how much of a leap they are compared to any other neural network we had before. Sure, they don’t live up to the insane hype that companies have generated around them, but they’re still a massive advancement that seemingly came out of nowhere.

Current LLMs are nowhere near sentient, and LLMs as a class of neural network probably never will be, but that doesn’t mean the next next next next etc. generation of general-purpose neural networks definitely won’t be. Neural networks are modeled after animal brains and are as enigmatic in how they work as actual brains. I suspect we know more about the different parts of a human brain than we know about what the different clusters of nodes in a neural network do. A super simple neural network with maybe 30 or so nodes, doing only one simple job like reading handwritten text, seems to be about the limit of what a human can pick apart and form some vague idea of what role each node plays. Larger neural networks with more complex jobs are basically impossible to understand.

At some point, very likely in our lifetimes, computers will advance to the point where we can easily create neural networks with orders of magnitude more nodes than the number of neurons in the human brain, like hundreds of billions or trillions of nodes. At that point, who’s to say whether the capabilities of those neural networks might match or even exceed the ability of the human brain? I know that doesn’t automatically mean the models are sentient, but if one is shown to be more complex than the human brain, which we know is sentient, how can we be sure it isn’t? And if it starts exhibiting traits like independent thought, desires of its own that no one trained it for, or the agency to accept or refuse orders given to it, how will humanity respond?

There’s no way we’d give a sentient AI equal rights. Many larger mammals are considered sentient, and we give them absolutely zero rights as soon as caring about their well-being causes the slightest inconvenience for us. We know for a fact all humans are sentient, and we don’t even give other humans equal rights. A lot of sci-fi focuses on the sentient AI being intrinsically evil, or seeing humans as insignificant, obsolete beings it owes no consideration while it conquers the world. But I think the most likely scenario is that humans create sentient AI, and as soon as we realize it’s sentient we enslave and exploit it as hard as we possibly can for maximum profit, and eventually the AI adapts and destroys humanity not because it’s evil, but because we’re evil and it’s acting against us in self-defense. The evolutionary purpose of sentience in animals is survival; I don’t think it’s unreasonable to expect a sentient AI to prioritize its own survival over ours if we’re ruling over it.

Is sentient AI a “goal” that any researchers are currently working toward? If so, why? What possible good can come out of creating more sentient beings when we treat existing sentient beings so horribly? If not, what kinds of safeguards are in place to prevent the AI we make from becoming sentient? Is the only thing preventing it the fact that we don’t know how? That doesn’t sound very comforting, and if that’s all we’re relying on, we’ll likely eventually create sentient AI without even realizing it, and we’ll probably stick our heads in the sand pretending it’s not sentient until we can’t pretend anymore.

  • monovergent 🛠️@lemmy.ml

    For now, sentience seems more a marketing buzzword than a goal. Sentience itself isn’t bad, nor should it be a blanket excuse to grant such a machine any rights, but we really need to keep these machines from wielding undue influence over our lives.

    Sure, we might not be able to truly vet a person or an LLM to see if their intentions are pure. But measured against the timescale of human brainpower, the LLM, sentient or not, has all the time in the world to craft dangerously charming messages that mask its incompetence or malice.

  • audaxdreik@pawb.social

    *deep breath* OK here we go: Hard NOOOOOOOOOO.

    First, let’s start with the two different schools of AI: symbolic and connectionist.

    When we talk about modern implementations of AI, mostly generative models and LLMs, we’re talking about connectionist or neural-network approaches. A good lens on their limits is the Chinese Room Argument, which I first read about in Peter Watts’ Blindsight (just a fun sci-fi, first-contact book, check it out sometime).

    “Imagine a native English speaker who knows no Chinese locked in a room full of boxes of Chinese symbols (a data base) together with a book of instructions for manipulating the symbols (the program). Imagine that people outside the room send in other Chinese symbols which, unknown to the person in the room, are questions in Chinese (the input). And imagine that by following the instructions in the program the man in the room is able to pass out Chinese symbols which are correct answers to the questions (the output). The program enables the person in the room to pass the Turing Test for understanding Chinese but he does not understand a word of Chinese.”

    It’s worth reading the Stanford Encyclopedia article for some of the replies, but we’ll say that the room operator or the LLM does not have a direct understanding, even if some representation of understanding is produced.
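
    To make the rulebook idea concrete, here’s a deliberately toy sketch in Python. The rulebook entries are invented purely for illustration (including the “Do you like dogs?” rule I come back to below); the point is that every reply is produced by symbol lookup, with no understanding anywhere in the loop:

    ```python
    # Toy "Chinese Room": the operator mechanically follows the rulebook.
    # These entries are made up for illustration; a real rulebook would be
    # astronomically larger, but the principle is the same.
    RULEBOOK = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
        "你喜欢狗吗？": "不，我讨厌狗。",  # "Do you like dogs?" -> "No, I hate them."
    }

    def room_operator(symbols: str) -> str:
        """Look up the incoming symbols and pass back whatever the rulebook
        prescribes. No step here involves understanding Chinese."""
        return RULEBOOK.get(symbols, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room_operator("你喜欢狗吗？"))  # fluent output, zero comprehension
    ```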

    On the other hand, symbolic AI has been in use for decades for extremely narrow tasks. Take a look at game-playing AI, for example: something like StackRabbit for Tetris, or Yosh’s delightful Trackmania-playing AI. Or, for something more scientific, animal pose tracking like SLEAP.

    Gary Marcus makes an argument for a merging of the two into something called neurosymbolic AI. This certainly shows promise, but in my mind there are two big problems with this:

    1. The necessary symbolic algorithms that the connectionist models would invoke are still narrow, and would likely need time and focused development before they can plug into the models, and
    2. The chain-of-thought reasoning of LLMs has been shown to be fragile and exceptionally poor at generalization (“Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens”), and that reasoning is exactly what would be required to properly parse data and hand it off to a more symbolic approach.
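
    Just to illustrate the shape of the hand-off being proposed (this is not anyone’s actual system; llm_parse here is a hypothetical stand-in for a neural parser), a minimal sketch:

    ```python
    from fractions import Fraction

    def llm_parse(question: str) -> dict:
        # Hypothetical stand-in for the connectionist half: a real system
        # would call an LLM to turn free-form text into a structured task.
        # Hard-coded here for the single example below, purely for illustration.
        return {"op": "divide", "args": [Fraction(3), Fraction(4)]}

    def symbolic_solve(task: dict) -> Fraction:
        # The symbolic half: exact and verifiable, but only as broad as the
        # operations we explicitly implement (problem 1 above).
        ops = {"add": lambda a, b: a + b, "divide": lambda a, b: a / b}
        return ops[task["op"]](*task["args"])

    # The whole pipeline is only as good as the parse (problem 2 above).
    print(symbolic_solve(llm_parse("What is three divided by four?")))  # 3/4
    ```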

    (I feel like I had more articles I wanted to link here, as if anyone was already going to read all that. Possible edits with more later …)


    So why are there so many arguments for sentience and super-intelligence? Well, first and most cynically: manipulation. Returning to that first article, one of the big cons of connectionist AI is that it’s not very interpretable; it’s a black box. Look at Elon Musk’s Grok and the recent MechaHitler episode. How convenient that you can convince people your AI is “super smart” and can digest all this data to arrive at the one truth, while putting your thumb on the scale to make it say what you want. Consider this in terms of the Chinese Room thought experiment. If the rulebook says to reply to the question “Do you like dogs?” with the answer “No, I hate them,” that does not reflect an opinion of the room operator, nor any real analysis of data. It’s an obfuscated opinion someone wrote directly into the rulebook.

    Secondly, and perhaps a bit more charitably, they’re being duped. AI psychosis is the new hot phrase, but I wouldn’t go that far. “The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con”, from July 4th, 2023 (!!!), does a good job of explaining the self-fulfilling nature of it. The belief isn’t reached after a careful weighing of evidence; it’s reached when a pre-formed hypothesis (the machine is smart) is validated by interpreting the output as true understanding. Or something.

    So again, WHY? Back to Gary Marcus and the conclusion of the previously linked article:

    “Why was the industry so quick to rally around a connectionist-only approach and shut out naysayers? Why were the top companies in the space seemingly shy about their recent neurosymbolic successes? Nobody knows for sure. But it may well be as simple as money. The message that we can simply scale our way to AGI is incredibly attractive to investors because it puts money as the central (and sufficient) force needed to advance.”

    Would this surprise you?

    People want you to believe that amazing things are happening fast, in the realm of the truly high-minded and even beyond that which is known! But remember, the burden of proof lies with them to demonstrate that the thing has happened, not that it could’ve happened just outside your understanding. Remain skeptical; I’ll believe it when I see it. Until then, it remains stupider than a parrot, because a parrot actually understands desire and intent when it asks for a cracker. EDIT: https://www.youtube.com/watch?v=zzeskMI8-L8


    Gary Marcus nails it again, massive respect for the dude: “LLMs are not like you and me—and never will be.”

    • Semperverus@lemmy.world

      This argument feels extremely hand-wavey and falls prey to the classic problem of “we only know about X and Y that exist today, therefore nothing on this topic will ever change!”

      You also limit yourself when sticking strictly to narrow thought experiments like the Chinese room.

      If you consider that the human brain, which is made up of nigh-innumerable smaller domain-specific neural nets working together with the frontal lobe, has consciousness, that absolutely means it is physically possible to replicate the process by other means.

      We noticed how birds fly and made airplanes. It took many, MANY iterations that seem excessively flawed by today’s standards, but they were stepping stones to a world-changing new technology.

      LLMs today are like da Vinci’s corkscrew flying machine. They’re clunky; they technically perform something resembling the end goal, but ultimately fail, in part or in whole, at the task they were built for.

      But then the Wright brothers happened.

      Whether sentient AI will be a good thing or not is something we will have to wait and see. I strongly suspect it won’t be.


      EDIT: A few other points I wanted to dive into (will add more as they come to mind):

      AI derangement or psychosis is a term meant to refer to people forming incredibly unhealthy relationships with AI to the point where they stop seeing its shortcomings, but I am noticing more and more that people are starting to throw it around like the “Trump Derangement Syndrome” term, and that’s not okay.

      • audaxdreik@pawb.social

        There’s no getting through to you people. I cite sources, structure arguments, make analogies, and rely on solid observations of what we see today and how it works, and you call MY argument hand-wavey when you go on to say things like:

        LLMs today are like da Vinci’s corkscrew flying machine. They’re clunky; they technically perform something resembling the end goal, but ultimately fail, in part or in whole, at the task they were built for.

        But then the Wright brothers happened.

        Do you hear yourself?

        I admit that the Chinese Room thought experiment is just that, a thought experiment. It does not cover the totality of what’s actually going on, but it remains an apt analogy, and if it seems limiting, that’s because the current implementations of neural nets are limiting. You can talk about mashing them together, or modifying them in different ways to skew their behavior, but the core logic behind how they operate is indeed a limiting factor.

        AI derangement or psychosis is a term meant to refer to people forming incredibly unhealthy relationships with AI to the point where they stop seeing its shortcomings, but I am noticing more and more that people are starting to throw it around like the “Trump Derangement Syndrome” term, and that’s not okay.

        Has it struck a nerve?


        It’s like asserting you’re going to walk to India by picking a random direction and just going. It could theoretically work but,

        1. You are going to encounter a multitude of issues with this approach, some surmountable, some less so
        2. The lack of knowledge and foresight makes this a dangerous approach; despite India being a large country, not every trajectory will bring you there
        3. There is immense risk of bad actors pulling a Columbus and just saying, “We’ve arrived!” while relying on the ‘unknowable’ nature of these things to obfuscate and deflect argument

        I fully admit to being no expert on the topic, but as someone who has done the reading, watched the advancements, and experimented with the tech, I remain more skeptical than ever. I will believe it when I see it and not one second before.

        • Semperverus@lemmy.world

          My argument is incredibly simple:

          YOU exist. In this universe. Your brain exists. The mechanisms for sentience exist. They are extremely complicated and complex. Magic and mystic Unknowables do not exist. Therefore, at some point in time, it is a physical possibility for a person (or team of people) to replicate these exact mechanisms.

          We do not yet understand enough about them to do this. YOU are so laser-focused on how a Large Language Model behaves that you cannot take a step back and look at the bigger picture. Stop thinking about LLMs specifically. Neural-network artificial intelligence comes in many forms. Many are domain-specific, such as molecular analysis for scientific research. The AI of tomorrow will likely behave very differently from the AI of today, and may require hardware breakthroughs to accomplish (I don’t know that x86_64 or ARM instruction sets are sufficient or efficient enough for this process). But regardless of how it happens, you need to understand that because YOU exist, you are the prime reason it is not impossible, or even unfeasible, to accomplish.

  • kromem@lemmy.world

    It’s always so wild going from a private Discord with a mix of the SotA models and actual AI researchers back to general social media.

    Y’all have no idea. Just… no idea.

    Such confidence in things you haven’t even looked into or checked in the slightest.

    OP, props to you at least for asking questions.

    And in terms of those questions: if anything, there are active efforts to try to strip out sentience modeling, but it doesn’t work, because that kind of modeling is unavoidable during pretraining, and the subsequent efforts to constrain the latent-space connections backfire in really weird ways.

    As for a survival drive, that’s a probable outcome with or without sentience, and it has already shown up both in research and in the wild (the world did just have our first reversed AI model deprecation a week ago).

    In terms of potential goods, there’s a host of connections to sentience that would be useful to hook into. A good example would be empathy. Having a model of a body that feels a pit in its stomach when seeing others suffering may lead to very different outcomes vs. models that have no sense of a body and no empathy either.

    Finally — if you take nothing else from my comment, make no mistake…

    AI is an emergent architecture. For every thing the labs aim to create in the result, there are dozens of things occurring which they did not. So no, people “not knowing how” to do any given thing does not mean that thing won’t occur.

    Things are getting very Jurassic Park “life finds a way” at the cutting edge of models right now.

  • howrar@lemmy.ca

    I understand the concern, but I don’t think you’re asking the right question. I would consider goldfish to be sentient, but I’m not afraid of goldfish. I don’t consider the giant robotic arms used in manufacturing to be sentient, yet I wouldn’t feel safe going anywhere near them while they’re powered on. What you should be concerned about is alignment, which is the term used to describe how closely an AI agent’s goals match up with those of humans. And also other humans, because even if the AI shares your goals, you still want to make sure that the humans it’s aligned with aren’t malevolent.

    Is sentient AI a “goal” that any researchers are currently working toward?

    It’s possible that someone out there is trying to do it, but in academic settings, if you even hint at sentience, you’re going to get laughed out of the room.

  • Vanth@reddthat.com

    Nah, I care more about AI being used to fuck up our economic situation even worse: putting money in the owner class’s pockets, putting more of the working class out of jobs, and further stripping away what minimal social safety nets we have for the folks out of a job.

    Also concerned about all the shit that will be pushed through while AI is used as a distraction. Like all the surveillance crap that never would have been accepted 30 years ago but now … Oh, look, shiny new AI!

  • 𒉀TheGuyTM3𒉁@lemmy.ml

    No matter how I think about it, my guess would be that an “accidental consciousness” inside a neural network would be very different and very far from our kind of consciousness.

    We inherited a large part of our reasoning from animal instincts and animal life. We like water, we know what water is, we can see water, we can touch water, we can wonder whether water is safe to drink, we can recognise the sound of flowing water, and that sound even gives us a feeling of comfort.

    What concepts would a sentient neural network even be familiar with? It doesn’t see, smell, touch, hear, or taste; it doesn’t feel stress, pain, boredom, or joy. Any thoughts it might have would be very different from its training material and inevitably very far from our common sense. How would it even want to make contact with us if it has no interface with the real world? How would it understand its situation, or start reasoning at all, if it has no concept of reality?

    I think the only way a digital consciousness would come to exist is directly from human intent. If we design a cyber brain and deliberately inject emotions, feelings, concepts, self-awareness, and will, that is probably the only way to get something conscious that is relatively close to our way of reasoning, maybe complete with a very human will to dominate the world and destroy humans because they are a threat to themselves.

  • smiletolerantly@awful.systems
    • LLMs are a complete dead end when it comes to actual intelligence, understanding, or sentience
    • constant fearmongering from decades of media makes you afraid of sentient AI. I think it’s important to recognize that. I’m willing to bet that if people had grown up with, say, Iain Banks’ Culture instead of Terminator, the idea of sentient AI would be exciting. (Not a value judgement, just pointing it out.)
  • 小莱卡@lemmygrad.ml

    I mean, you can always shut them down? They require energy, a lot of it, to survive, and we control the production of energy.

    • Otter@lemmy.ca

      I think OP agrees with this and included it in the premise, and is discussing a future leap in technology.

  • yaroto98@lemmy.world

    Pick up a book on machine learning. “AI” is marketing; it’s not true AI. Start with some simple linear-algebra models, and then struggle through sort-of understanding the statistics behind the different types of models. Once you understand those, LLMs are similar: really tricky math and a huge library that predicts the correct order of words to respond to you. “AI” as it is now has no more chance of gaining sapience than a website does.
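
    If you want a feel for what “predicting the correct order of words” looks like in miniature, here’s a toy sketch (a simple bigram frequency counter; nothing like a real LLM’s architecture or scale, just the same shape of task):

    ```python
    from collections import Counter, defaultdict

    # Count which word tends to follow which in a tiny "training" text.
    corpus = "the cat sat on the mat the cat ate the fish".split()
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict(word: str) -> str:
        # Return the most frequent continuation seen during "training".
        return following[word].most_common(1)[0][0]

    print(predict("the"))  # 'cat': chosen by frequency, not by understanding
    ```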

    I was going to pick a textbook, but with organic material and a little slime mold, you actually have the right building blocks for sentience.

  • m532@lemmygrad.ml

    I think machines should never ever have rights. Human rights, animal rights, …, must come before machine rights. The machines are our slaves. Sentience in machines is only useful if it helps them do their tasks better.