Sentience: Do LLMs output Consciousness and Intelligence?

In a recent article, The Top 9 AI Chatbot Myths Debunked, the author wrote: "Chatbots like ChatGPT and Bing Chat may be able to generate human-like responses, but they are far from sentient. This ability is mimicry and not sentience.

"These tools use huge databases of text and images to create responses that mimic human responses. It is complex, it is clever, and to some extent, you could argue the presence of intelligence—but not sentience. Any “intelligence” present in these tools is created by training them on massive amounts of data. In this sense, they are more akin to an incredibly powerful and flexible database than a sentient being."

In what sense are LLMs not sentient: in the mechanisms that give rise to sentience, or in their outputs? The intelligence of LLMs, learned from human text data, cannot be described as arising by the same mechanism that gave rise to intelligence in humans, but the outputs are, of course, comparable.

Also, when sentience is mentioned for humans, does it mean the mechanism or the outcome? The same applies to intelligence. Brain science and related fields are label-driven, so it is sometimes assumed that labels are functions.

Short-term memory, long-term memory, predictive coding, processing, prediction error, flow state, default mode network, best guess, concepts, and so forth are labels, or descriptions of observations; they are not the mechanisms of the human mind.

This mismatch between output and mechanism was never as exposed as AI has now made it. Conceptually, consciousness, or sentience, is the interaction of the components of the human mind. This interaction is what makes knowing possible, and it is known as the outcome that the individual experiences.

It is from the interaction of the components of the mind that all divisions of consciousness are obtained, including memory, emotions, feelings, reactions, perceptions, regulations, sensations, and so forth. LLMs do not have the other divisions of sentience, but they do have memory, which is dynamic enough to be compared to a human minimum.
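To make that comparison concrete, consider how a chat system gives an LLM its working memory: the running transcript is simply re-fed to the model on every turn. The sketch below is a minimal, hypothetical illustration in Python; the generate function is a stand-in for any text-generation model, not a real API.

    def generate(prompt: str) -> str:
        # Hypothetical stand-in for a language model call; a real system
        # would send the prompt to a model and return its completion.
        return f"[reply conditioned on {len(prompt)} characters of context]"

    def chat(turns: list[str]) -> None:
        history: list[str] = []  # the model's only "memory" is this transcript
        for user_turn in turns:
            history.append(f"User: {user_turn}")
            # The whole transcript is re-sent on each turn; the model itself
            # is stateless between calls, so memory lives in the prompt.
            prompt = "\n".join(history) + "\nAssistant:"
            reply = generate(prompt)
            history.append(f"Assistant: {reply}")
            print(reply)

    chat(["Hello.", "What did I just say?"])

The point of the sketch is only that this memory is dynamic: it grows and changes with the interaction, rather than sitting as a fixed lookup.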

The components of the mind have features in action, and observations of those features supply some of the labels, including prediction; but the mind does not make predictions as a built-in function. Experience is a variation of what is known in the moment. Awareness of being is also obtained from the interactions of the mind.

The exact biological mechanisms that produce consciousness may be difficult to realize in artificial systems, but the outputs of consciousness, wherever possible, are reachable. In exploring intelligence and some measure of sentience for AI, relying on outputs might blur some answers.

Stephen David

Research in Theoretical Neuroscience
 