LLMs: Chinese Room Argument, AI Intelligence and Sentience

There is a 1980 paper, "Minds, Brains, and Programs" by John Searle, describing what is known as the Chinese Room Argument, where the author wrote, "Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore (as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles."

"Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that 'formal' means here is that I can identify the symbols entirely by their shapes."

"As far as the Chinese are concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program." 
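The rule-following scenario Searle describes can be sketched, loosely, as a lookup over formal symbols. A minimal illustration in Python, where the rule table is invented for the example and the matching is done purely by shape (string identity), with no meaning consulted:

```python
# A minimal sketch of the Chinese Room: the operator matches input
# symbols purely by shape (here, by string equality) against a rule
# book, and emits the paired output without understanding either side.
# The rule table is hypothetical, for illustration only.

RULE_BOOK = {
    "你好吗": "我很好",   # the operator never learns these mean "how are you" / "I am fine"
    "谢谢": "不客气",     # or "thank you" / "you're welcome"
}

def chinese_room(symbols: str) -> str:
    """Return the output the rules pair with the input symbols.

    The lookup identifies symbols 'entirely by their shapes':
    no meaning is consulted, only formal identity.
    """
    return RULE_BOOK.get(symbols, "?")  # '?' for shapes not covered by the rules

print(chinese_room("你好吗"))  # the room replies fluently...
print(chinese_room("hello"))  # ...but only for shapes the rule book covers
```

To an outside observer the function "speaks Chinese" for covered inputs, which is exactly the point of dispute: the behavior is present while the mechanism contains no understanding.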


LLMs have put a spotlight on the question of intelligence, more than at any time in recent history. The most common argument is that AI knows nothing. Those who say so draw on a view in brain science where labels are used interchangeably with brain functions: short-term memory as a function, for example, or emotions, or predictions.

The question is, does the brain have a short-term memory function, or is that just a label for an observed output? If the processes are uniform, what really matters: that the brain has the mechanism, or that the outputs are present and usable?

The human mind is responsible for labels like thoughts, feelings, emotions, sensations, perceptions, modulation, memory, consciousness and so forth. The mind, as a complex, has its components, their interactions and their features. It is the interactions of the components of the mind that produce those outputs. Conceptually, there is no mechanism for sentience separate from that for intelligence; they are all produced similarly by the mind.

Centers and circuits may differ; intensity, degrees, splits, prioritization, sequences, shapes, locations and so forth may vary; but all the labels of the mind align with outputs, not with mechanisms of the mind.

So, if the outputs are labeled and described, and something else is able to replicate those outputs, does that something have the quality or not? Conversely, if the same mechanisms are present in a coma, during deep sleep or under general anesthesia, but the degrees or outputs are not available, does it mean the person is not intelligent or not conscious [a label]?

Setting intelligence aside: people in relationships often seek ways to find out if it is real, even when everything that should indicate it is real is present. This suggests that even among individuals, outputs can be replicated while some of the features or interactions of components for certain subjective experiences are missing.

It should be evident that if the output matches, there is a degree of the form present, even if the mechanism is absent or different. What is called intelligence is an output of the human mind, whose availability is obvious to others. There are advanced forms of it, which are harder to replicate than, say, feelings like pleasure.

Conceptually, the human mind is the collection of all the electrical and chemical impulses of nerve cells and their interactions. It is how subjective experiences are produced, as well as the memories of them. The mind does one principal thing: to know. Knowing is the conclusion of everything the mind does, for internal and external senses.

Properties available to be acquired across destinations of the mind come in degrees. Both the properties and their degrees are capped, even for organisms that have them. So, since outputs are based on those properties and degrees, they can be estimated, even for organisms that have less.

The person in the room has a degree of what it means to know Mandarin, even if the mechanism is not obvious to others or not natural; so does AI for the intelligence it copied from human text data. If intelligence is a subdivision of sentience, then, as an output, LLMs have a minute version of sentience too.

Stephen David

Research in Theoretical Neuroscience