No, Today's AI Isn't Sentient. Here's How We Know

Artificial general intelligence (AGI) is the term used to describe an artificial agent that is at least as intelligent as a human in all the many ways a human displays (or can display) intelligence. It’s what we used to call artificial intelligence, until we started creating programs and devices that were undeniably “intelligent,” but in limited domains—playing chess, translating language, vacuuming our living rooms.

The felt need to add the "G" came from the proliferation of systems powered by AI but focused on a single task or a very small number of tasks. Deep Blue, IBM's impressive early chess-playing program, could beat world champion Garry Kasparov, but would not have the sense to stop playing if the room burst into flames.

Now, general intelligence is a bit of a myth, at least if we flatter ourselves that we have it. We can find plenty of examples of intelligent behavior in the animal world that achieve results far better than we could achieve on similar tasks. Our intelligence is not fully general, but general enough to get done what we want to get done in most environments we find ourselves in. If we’re hungry, we can hunt a mastodon or find a local Kroger’s; when the room catches on fire, we look for the exit.

One of the essential characteristics of general intelligence is “sentience,” the ability to have subjective experiences—to feel what it’s like, say, to experience hunger, to taste an apple, or to see red. Sentience is a crucial step on the road to general intelligence.

With the release of ChatGPT in November 2022, the era of large language models (LLMs) began. This instantly sparked a vigorous debate about whether these algorithms might in fact be sentient. The implications of the possible sentience of LLM-based AI have not only set off a media frenzy, but also profoundly impacted some of the worldwide policy efforts to regulate AI. The most prominent position is that the emergence of "sentient AI" could be extremely dangerous for humankind, possibly bringing about an "extinction-level" or "existential" crisis. After all, a sentient AI might develop its own hopes and desires, with no guarantee they wouldn't clash with ours.

This short piece started as a WhatsApp group chat aimed at debunking the claim that LLMs might have achieved sentience. It is not meant to be complete or comprehensive. Our main point here is to argue against the most common defense offered by the "sentient AI" camp, which rests on LLMs' ability to report having "subjective experiences."

Why some people believe AI has achieved sentience

Over the past months, both of us have had robust debates with many colleagues in the field of AI, including deep one-on-one conversations with some of the most prominent and pioneering AI scientists. The question of whether AI has achieved sentience has been a prominent topic, and a small number of them believe strongly that it has. Here is the gist of the argument, as put by one of its most vocal proponents and quite representative of those in the "sentient AI" camp: we believe that other people have subjective experiences because they report them. When you say "I'm hungry," I have no direct access to your internal states, yet I take your report at face value and conclude that you are indeed hungry. An LLM can produce exactly the same report, so by the same standard we should take it at its word and conclude that it, too, has subjective experiences and is therefore sentient.

Why they’re wrong

While this sounds plausible at first glance, the argument is wrong. It is wrong because our evidence is not exactly the same in both cases. Not even close.

When I conclude that you are experiencing hunger when you say "I'm hungry," my conclusion is based on a large cluster of circumstances. First is your report—the words that you speak—and perhaps some other behavioral evidence, like the grumbling in your stomach. Second is the absence of contravening evidence, as there might be if you had just finished a five-course meal. Finally, and this is most important, is the fact that you have a physical body like mine, one that periodically needs food and drink, that gets cold when it's cold and hot when it's hot, and so forth.

Now compare this to our evidence about an LLM. The only thing that is common is the report, the fact that the LLM can produce the string of syllables “I’m hungry.” But there the similarity ends. Indeed, the LLM doesn’t have a body and so is not even the kind of thing that can be hungry.

If the LLM were to say, "I have a sharp pain in my left big toe," would we conclude that it had a sharp pain in its left big toe? Of course not: it doesn't have a left big toe! Just so, when it says that it is hungry, we can in fact be certain that it is not, since it doesn't have the kind of physiology required for hunger.

When humans experience hunger, they are sensing a collection of physiological states—low blood sugar, empty grumbling stomach, and so forth—that an LLM simply doesn’t have, any more than it has a mouth to put food in and a stomach to digest it. The idea that we should take it at its word when it says it is hungry is like saying we should take it at its word if it says it’s speaking to us from the dark side of the moon. We know it’s not, and the LLM’s assertion to the contrary does not change that fact.

All sensations—hunger, feeling pain, seeing red, falling in love—are the result of physiological states that an LLM simply doesn’t have. Consequently we know that an LLM cannot have subjective experiences of those states. In other words, it cannot be sentient.

An LLM is a mathematical model coded on silicon chips. It is not an embodied being like humans. It does not have a “life” that needs to eat, drink, reproduce, experience emotion, get sick, and eventually die.

It is important to understand the profound difference between how humans generate sequences of words and how an LLM generates those same sequences. When I say “I am hungry,” I am reporting on my sensed physiological states. When an LLM generates the sequence “I am hungry,” it is simply generating the most probable completion of the sequence of words in its current prompt. It is doing exactly the same thing as when, with a different prompt, it generates “I am not hungry,” or with yet another prompt, “The moon is made of green cheese.” None of these are reports of its (nonexistent) physiological states. They are simply probabilistic completions.
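To make this contrast concrete, here is a minimal sketch of what "generating the most probable completion" amounts to in practice. It is written in Python and assumes the open-source Hugging Face transformers library and the small GPT-2 model, neither of which the authors mention; any causal language model would illustrate the same point. Given a prompt, the model produces nothing more than a probability distribution over possible next tokens:

# Assumption: GPT-2 via Hugging Face transformers stands in for "an LLM."
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "After skipping breakfast and lunch, I am"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    # The model assigns a score (logit) to every token in its vocabulary;
    # we keep only the scores for the position that would come next.
    next_token_logits = model(**inputs).logits[0, -1]

# Turn the scores into probabilities and list the five most likely continuations.
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    print(f"{tokenizer.decode(idx.item())!r:>12}  p = {p.item():.3f}")

Whichever continuation the model happens to favor, the computation is identical: score every token in the vocabulary against the prompt and pick a likely one. Nothing in that computation corresponds to blood sugar, an empty stomach, or any other physiological state.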

We have not achieved sentient AI, and larger language models won’t get us there. We need a better understanding of how sentience emerges in embodied, biological systems if we want to recreate this phenomenon in AI systems. We are not going to stumble on sentience with the next iteration of ChatGPT.

Li and Etchemendy are co-founders of the Institute for Human-Centered Artificial Intelligence at Stanford University. Li is a professor of Computer Science, author of ‘The Worlds I See,’ and 2023 TIME100 AI honoree. Etchemendy is a professor of Philosophy and former provost of Stanford.
