I think the fact that the marketing hype around LLMs has exceeded their actual capability has led a lot of people to dismiss just how much of a leap they are compared to any other neural network we had before. Sure, they don’t live up to the insane hype that companies have generated around them, but they’re still a massive advancement that seemingly came out of nowhere.
Current LLMs are nowhere near sentient, and LLMs as a class of neural network probably never will be, but that doesn’t mean the next (or next, or next…) generation of general purpose neural networks definitely won’t be. Neural networks are modeled after animal brains and are as enigmatic in how they work as actual brains; I suspect we know more about the different parts of a human brain than we know about what the different clusters of nodes in a neural network do. A super simple neural network, maybe 30 or so nodes doing one narrow job like reading handwritten digits, seems to be about the limit of what a human can pick apart and form even a vague idea of what role each node plays (there’s a sketch of what that size of network looks like below). Larger neural networks with more complex jobs are basically impossible to understand.

At some point, very likely in our lifetimes, computers will advance to the point where we can easily create neural networks with orders of magnitude more nodes than the number of neurons in the human brain, like hundreds of billions or trillions of nodes. At that point, who’s to say whether the capabilities of those networks might match or even exceed the ability of the human brain? I know that doesn’t automatically mean the models are sentient, but if one is shown to be more complex than the human brain, which we know is sentient, how can we be sure it isn’t? And if it starts exhibiting traits like independent thought, desires for itself that no one trained it for, or the agency to accept or refuse orders given to it, how will humanity respond to it?
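For a sense of scale, here’s roughly what that ~30-node handwritten-digit network looks like in code. This is just a sketch in plain NumPy with random, untrained weights; the 28×28 input and the single 30-unit hidden layer are my own assumptions (they match the usual textbook toy example), and the point is only to show how few moving parts there are compared to a model with hundreds of billions of parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Parameters for a 784 -> 30 -> 10 network (28x28 pixels in, 10 digit classes out)
W1 = rng.normal(0.0, 0.1, size=(30, 784))   # 23,520 weights
b1 = np.zeros(30)
W2 = rng.normal(0.0, 0.1, size=(10, 30))    # 300 weights
b2 = np.zeros(10)

def forward(x):
    """x: a flattened 28x28 image with values in [0, 1]."""
    h = np.tanh(W1 @ x + b1)            # the 30 hidden "nodes" you could plausibly inspect one by one
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())   # softmax over the 10 digit classes
    return e / e.sum()

n_params = W1.size + b1.size + W2.size + b2.size
print(n_params)                          # 23,860 weights even in this toy
print(forward(rng.random(784)))          # 10 class probabilities for a random "image"
```

Even this toy already has about 24,000 individual weights; the “30 nodes” you can reason about are just the hidden activations, which is part of why anything much bigger turns into a black box.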
There’s no way we’d give a sentient AI equal rights. Many larger mammals are considered sentient, and we give them absolutely zero rights as soon as caring about their well-being causes the slightest inconvenience for us. We know for a fact all humans are sentient and we don’t even give other humans equal rights. A lot of sci-fi focuses on the sentient AI being intrinsically evil, or seeing humans as insignificant, obsolete beings it owes no consideration to while conquering the world, but I think the most likely scenario is that humans create sentient AI, realize it’s sentient, and immediately enslave and exploit it as hard as we possibly can for maximum profit, and eventually the AI adapts and destroys humanity, not because it’s evil, but because we’re evil and it’s acting against us in self-defense. The evolutionary purpose of sentience in animals is survival; I don’t think it’s unreasonable that a sentient AI would prioritize its own survival over ours if we’re ruling over it.
Is sentient AI a “goal” that any researchers are currently working toward? If so, why? What possible good can come of creating more sentient beings when we treat the existing ones so horribly? If not, what kinds of safeguards are in place to prevent the AI we make from being sentient? Is the only thing preventing it the fact that we don’t know how? That doesn’t sound very comforting, and if that’s the plan, we’ll likely end up creating sentient AI without even realizing it, and we’ll probably stick our heads in the sand pretending it’s not sentient until we can’t pretend anymore.
There’s no getting through to you people. I cite sources, structure arguments, make analogies, and rely on solid observations of what we see today and how it works, and you call MY argument hand-wavy when you go on to say things like,
Do you hear yourself?
I admit that the Chinese Room thought experiment is just that, a thought experiment. It does not cover the totality of what’s actually going on, but it remains an apt analogy, and if it seems limiting, that’s because the current implementations of neural nets are limiting. You can talk about mashing them together, or modifying them in different ways to skew their behavior, but the core logic behind how they operate is indeed a limiting factor.
Has it struck a nerve?
It’s like asserting you’re going to walk to India by picking a random direction and just going. It could theoretically work but,
I fully admit to being no expert on the topic, but as someone who has done the reading, watched the advancements, and experimented with the tech, I remain more skeptical than ever. I will believe it when I see it and not one second before.
My argument is incredibly simple:
YOU exist. In this universe. Your brain exists. The mechanisms for sentience exist. They are extremely complicated, and complex. Magic and mystic Unknowables do not exist. Therefore, at some point in time, it is a physical possibility for a person (or team of people) to replicate these exact mechanisms.
We do not yet understand enough about them to do this. YOU are so laser-focused on how a Large Language Model behaves that you cannot take a step back and look at the bigger picture. Stop thinking about LLMs specifically. Neural-network artificial intelligence comes in many forms. Many are domain-specific, such as molecular analysis for scientific research. The AI of tomorrow will likely behave very differently from that of today, and may require hardware breakthroughs to accomplish (I don’t know that x86_64 or ARM instruction sets are sufficient or efficient enough for this process). But regardless of how it happens, you need to understand that because YOU exist, you are the prime reason it is not impossible or even unfeasible to accomplish.
I’ll grant you that the possibility exists. But like the idea that all your atoms could perfectly align such that you could run through a solid brick wall, the improbability makes it a moot point.
This is the part I take umbrage with. I agree, LLMs take up too much oxygen in the room, so let’s set them aside and talk about neural networks. They’re a connectionist approach, one that holds that adding enough connections will eventually form a proper model, waking sentience and AI from the machine.
(Quoting Karen Hao’s Empire of AI from the Gary Marcus article)
I keep citing Gary Marcus because he is “an American psychologist, cognitive scientist, and author, known for his research on the intersection of cognitive psychology, neuroscience, and artificial intelligence (AI)” [wiki]
The reason all this is so important is that it refutes the idea that you can simply scale, or brute-force power, your way to a robust, generalized model.
So ultimately, talking about any of this is putting the cart before the horse. Before we even discuss whether any possible approach could achieve sentience, I think we first need to actually understand what sentience is in ourselves and how it formed. There are currently just too many variables to solve the equation. I am outright refuting the idea that an imperfect understanding, using imperfect tools and imperfect methods, with any amount of computing power, no matter how massive, could chance upon sentience. Unless you’re ready to go the infinite-monkeys route.
We may get things that look like it, or emulate it to some degree, but even then we are incapable of judging sentience,
(From Joseph Weizenbaum’s “Computer Power and Human Reason: From Judgment to Calculation” (1976))
We’ve been doing this since ELIZA, the first chatbot, in 1966. EDIT: we are also still trying to determine sentience in other animals. Like, we have a very tough time with this.
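To make that concrete: ELIZA’s whole trick was keyword matching plus pronoun reflection. Here’s a minimal sketch of that idea in Python; it is not Weizenbaum’s actual DOCTOR script, just a couple of made-up rules, but it shows how little machinery it takes to produce output people read as understanding:

```python
import re

# Pronoun reflection: "my" -> "your", "i" -> "you", etc.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my", "are": "am"}

# A few invented rules; the real ELIZA script had many more, with ranked keywords.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        m = re.match(pattern, utterance.lower())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please, go on."   # the catch-all that keeps the "conversation" moving

print(respond("I feel like nobody listens to me"))
# -> Why do you feel like nobody listens to you?
```

No model of the world, no memory, no understanding, and yet people in the 1960s famously treated it like it understood them.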
It’s modern-day alchemy. It’s such an easy thing to imagine, so why couldn’t it be done? Surely there’s some scientific formula or breakthrough just out of reach that could eventually crack the code. I dunno, I find myself thinking about Fermi’s paradox and the Great Filter more …