LLMs Anthropomorphism: AI Risks, Ethics and Regulation

There is a recent article, Is AI Our Most Dangerous Rival? stating that, "Sentience is understood to be the capacity to feel and register experiences and feelings. AI only becomes sentient when it has the empirical intelligence to think, feel, and perceive the physical world around it just as humans do. AI sentience would be a Pinocchio moment—life suddenly appearing in an inanimate object. It sounds daunting, but if it ever came to be, would it be more or less of a threat? Why would it respond to human beings? Its refusal would distance us from our creation and make empathising to understand its intentions even more challenging. But would it have intentions? Why would it do anything?"

"We are more likely to make errors in communication and our attempts to empathise due to our anthropomorphism of AI. This is enhanced when we see two arms, two legs, and a face on an AI robot. We assume it is an emotional being like us, not a logical entity like our calculator. If we try to empathise with a calculator, we try to perceive its logic, not its emotions. We have no experience of being driven by logic alone, which inhibits our understanding of entities that are."

Two key questions should be central in exploring the risks of artificial intelligence: what does AI [seem to] know, or what can it know, and what can it do, or what might it be able to do, with what it knows? The questions of sentience and anthropomorphism are important, but less so than those of knowing and doing.

Humans, in summary, know and do. Doing is a subset of knowing, since knowing is central to existence beyond the mere possession of a body furnished by biology: in the immediate aftermath of death, most parts of the body remain, while the ability to know vanishes.

Already, LLMs can carry out several tasks accurately. They can interact in, or simulate, what looks like human-to-human conversation. They can also provide details that enable a user to act, or to learn about alternatives.

They can also be equipped with the capability to act on what they know, or be aided to know and then to do. The knowing and doing of AI, rather than evaluations of sentience and anthropomorphism, could be the key to safety and to an ethical approach.


Stephen David

Research in Theoretical Neuroscience