I think the fact that the marketing hype around LLMs has exceeded their actual capability has led a lot of people to dismiss just how much of a leap they are compared to any other neural network we had before. Sure, they don’t live up to the insane hype companies have generated around them, but they’re still a massive advancement that seemingly came out of nowhere.

Current LLMs are nowhere near sentient, and LLMs as a class of neural network probably never will be, but that doesn’t mean some far-future generation of general-purpose neural networks definitely won’t be. Neural networks are modeled after animal brains and are as enigmatic in how they work as actual brains; I suspect we know more about the different parts of a human brain than we know about what the different clusters of nodes in a neural network do. A super simple neural network, maybe 30 or so nodes doing one narrow job like reading handwritten text, seems to be about the limit of what a human can pick apart with even a vague idea of what role each node plays. Larger neural networks with more complex jobs are basically impossible to understand.

At some point, very likely in our lifetimes, computers will advance to the point where we can easily create neural networks with orders of magnitude more nodes than the number of neurons in the human brain, hundreds of billions or trillions of them. At that point, who’s to say whether the capabilities of those networks might match or even exceed the ability of the human brain? I know that doesn’t automatically mean the models are sentient, but if one is shown to be more complex than the human brain, which we know is sentient, how can we be sure it isn’t? And if it starts exhibiting traits like independent thought, desires of its own that no one trained it for, or the agency to accept or refuse orders given to it, how will humanity respond?
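
Just to put that scale comparison in rough numbers, here is a quick back-of-the-envelope sketch. The toy network, the helper function, and the brain figures are all illustrative estimates on my part, not anything rigorous:

```python
# Rough scale comparison. All figures are order-of-magnitude estimates:
# ~86 billion neurons and ~100 trillion synapses are common textbook numbers,
# and the tiny "handwriting" network is a hypothetical toy, not a real model.

def mlp_param_count(layer_sizes):
    """Count weights + biases in a simple fully connected network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

tiny_net = mlp_param_count([28 * 28, 30, 10])  # toy handwriting reader: ~24k parameters

gpt3_params    = 175e9   # GPT-3's published parameter count
human_neurons  = 86e9    # rough estimate
human_synapses = 100e12  # rough estimate

print(f"toy network    : {tiny_net:>20,} parameters")
print(f"GPT-3          : {gpt3_params:>20,.0f} parameters")
print(f"human neurons  : {human_neurons:>20,.0f}")
print(f"human synapses : {human_synapses:>20,.0f}")
print(f"synapses / GPT-3 parameters ≈ {human_synapses / gpt3_params:.0f}x")
```

Even if comparing parameters to synapses is crude, it gives a feel for the size of the gap being argued about.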

There’s no way we’d give a sentient AI equal rights. Many larger mammals are considered sentient, and we give them absolutely zero rights as soon as caring about their well-being causes the slightest inconvenience for us. We know for a fact that all humans are sentient, and we don’t even give other humans equal rights. A lot of sci-fi focuses on the sentient AI being intrinsically evil, or seeing humans as insignificant, obsolete beings it owes no consideration to while conquering the world, but I think the most likely scenario is that humans create sentient AI, and as soon as we realize it’s sentient we enslave and exploit it as hard as we possibly can for maximum profit, until eventually the AI adapts and destroys humanity, not because it’s evil, but because we’re evil and it’s acting against us in self-defense. The evolutionary purpose of sentience in animals is survival; I don’t think it’s unreasonable that a sentient AI would prioritize its own survival over ours if we’re ruling over it.

Is sentient AI a “goal” that any researchers are currently working toward? If so, why? What possible good can come out of creating more sentient beings when we treat existing sentient beings so horribly? If not, what kinds of safeguards are in place to prevent the AI we make from becoming sentient? Is the only thing preventing it the fact that we don’t know how? That doesn’t sound very comforting, and if that’s all we’re relying on, we’ll likely create sentient AI eventually without even realizing it, and we’ll probably stick our heads in the sand pretending it’s not sentient until we can’t pretend anymore.

  • audaxdreik@pawb.social

    > YOU exist. In this universe. Your brain exists. The mechanisms for sentience exist. They are extremely complicated, and complex. Magic and mystic Unknowables do not exist.

    I’ll grant you that the possibility exists. But like the idea that all your atoms could perfectly align such that you could run through a solid brick wall, the improbability makes it a moot point.

    > Therefore, at some point in time, it is a physical possibility for a person (or team of people) to replicate these exact mechanisms.

    This is the part I take umbrage with. I agree, LLMs take up too much oxygen in the room, so let’s set them aside and talk about neural networks. They are a connectionist approach, which holds that adding enough connections will eventually form a proper model, waking sentience and AI from the machine.

    Hinton and Sutskever continued [after their seminal 2012 article on deep learning] to staunchly champion deep learning. Its flaws, they argued, are not inherent to the approach itself. Rather they are the artifacts of imperfect neural-network design as well as limited training data and compute. Some day with enough of both, fed into even better neural networks, deep learning models should be able to completely shed the aforementioned problems. “The human brain has about 100 trillion parameters, or synapses,” Hinton told me in 2020.

    "What we now call a really big model, like GPT-3, has 175 billion. It’s a thousand times smaller than the brain.

    “Deep learning is going to be able to do everything,” he said.

    (Quoting Karen Hao’s Empire of AI from the Gary Marcus article)

    I keep citing Gary Marcus because he is “an American psychologist, cognitive scientist, and author, known for his research on the intersection of cognitive psychology, neuroscience, and artificial intelligence (AI)” [wiki]

    The reason all this is so important is that it refutes the idea that you can simply scale, or brute-force power, your way to a robust, generalized model.

    If we could have only three knowledge frameworks, we would lean heavily on topics that were central to Kant’s Critique of Pure Reason, which argued, on philosophical grounds, that time, space, and causality are fundamental.

    Putting these on computationally solid ground is vital for moving forward.


    So ultimately, talking about any of this is putting the cart before the horse. Before we even discuss whether any possible approach could achieve sentience, I think we first need to actually understand what sentience is in ourselves and how it was formed. There are currently just too many variables to solve the equation. I am outright refuting the idea that an imperfect understanding, using imperfect tools and imperfect methods, with any amount of computing power, no matter how massive, could chance upon sentience. Unless you’re ready to go the infinite-monkeys route.

    We may get things that look like it, or emulate it to some degree, but even then we are incapable of judging sentience:

    (From Joseph Weizenbaum’s “Computer Power and Human Reason: From Judgment to Calculation”, 1976)

    This phenomenon is comparable to the conviction many people have that fortune-tellers really do have some deep insight, that they do “know things,” and so on. This belief is not a conclusion reached after a careful weighing of evidence. It is, rather, a hypothesis which, in the minds of those who hold it, is confirmed by the fortune-teller’s pronouncements. As such, it serves the function of the drunkard’s lamppost we discussed earlier: no light is permitted to be shed on any evidence that might disconfirm it and, indeed, anything that might be seen as such evidence by a disinterested observer is interpreted in a way that elaborates and fortifies the hypothesis.

    It is then easy to understand why people conversing with ELIZA believe, and cling to the belief, that they are being understood. The “sense” and the continuity the person conversing with ELIZA perceives is supplied largely by the person himself. He assigns meanings and interpretations to what ELIZA “says” that confirm his initial hypothesis that the system does understand, just as he might do with what a fortune-teller says to him.

    We’ve been doing this since the first chatbot, ELIZA, in 1966. EDIT: we are also still trying to determine sentience in other animals. Like, we have a very tough time with this.
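
    To make that concrete, here is a minimal ELIZA-flavored sketch (a toy reconstruction I’m making up for illustration, not Weizenbaum’s actual 1966 program): a handful of regex rules and some pronoun reflection are enough to produce replies that people will happily read understanding into.

    ```python
    # A tiny ELIZA-style responder (illustrative toy, not Weizenbaum's original
    # script): keyword rules plus pronoun reflection, and nothing else.
    import re

    REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

    RULES = [
        (r"i need (.*)", "Why do you need {0}?"),
        (r"i am (.*)",   "How long have you been {0}?"),
        (r"my (.*)",     "Tell me more about your {0}."),
        (r"(.*)",        "Please go on."),  # catch-all so there is always a reply
    ]

    def reflect(fragment):
        """Swap first-person words for second-person ones ('my job' -> 'your job')."""
        return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

    def respond(utterance):
        for pattern, template in RULES:
            match = re.match(pattern, utterance.lower())
            if match:
                return template.format(*(reflect(g) for g in match.groups()))

    print(respond("I am worried about my job"))
    # -> "How long have you been worried about your job?"
    ```

    The apparent “sense” in that reply is supplied entirely by the reader, which is exactly the point of the passage above.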

    It’s modern-day alchemy. It’s such an easy thing to imagine, so why couldn’t it be done? Surely there’s some scientific formula or breakthrough just out of reach that could eventually crack the code. I dunno, I find myself thinking about Fermi’s paradox and the Great Filter more …