There is a recent article, Is AI Our Most Dangerous Rival? stating that, "Sentience is understood to be the capacity to feel and register experiences and feelings. AI only becomes sentient when it has the empirical intelligence to think, feel, and perceive the physical world around it just as humans do. AI sentience would be a Pinocchio moment—life suddenly appearing in an inanimate object. It sounds daunting, but if it ever came to be, would it be more or less of a threat? Why would it respond to human beings? Its refusal would distance us from our creation and make empathising to understand its intentions even more challenging. But would it have intentions? Why would it do anything?"
There is a new book, The Consciousness Revolutions: From Amoeba Awareness to Human Emancipation, whose author discussed its themes in a recent interview.
There is a recent article, This Is What Neuroscientists and Philosophers Understand About Addiction, stating that, "claims that people with addiction are unable to control themselves are belied by basic facts. However, those who contend that substance use disorder is just a series of self-centered decisions face conflicting evidence, too. Brains can be seen as prediction engines, constantly calculating what is most likely to happen next and whether it will be beneficial or harmful." The author asserts that during addiction, "despairing thoughts about oneself and the future — not just thoughts about how good the drug is — predominate. At the same time, thoughts about negative consequences of use are minimized, as are those about alternative ways of coping. Drugs are overvalued as a way to mitigate distress; everything else is undervalued. To recover, people with addiction need both new skills and an environment that provides better alternatives. This doesn’t mean rewarding people for bad behavior." The article did not mention the mind. It mentions the brain a few times, but does anyone use drugs for the brain? Is the reason for addiction some brain tissue? The article mentions liking, wanting, pleasure, mental illness, traumatic childhood, psychiatric disorders, emotions and so forth, but are experiences of pleasure, wanting, liking and trauma of the brain or of the mind? It says the brain is a prediction engine, but what exactly in the brain is predicting? Is it the Acom predicting to go left rather than right, or a sulcus seeking to take the place of a gyrus because the gyrus has a better view, or the hypothalamus predicting what position it can take for a chance to invade the thalamus, so that it becomes more prominent? The brain, as a biological organ, does not make predictions. It may be explained that the brain nominally means the mind, or that the two are interchangeable, but the mind does not make predictions either.
It has functions that correspond with what is observed as predictions, but the mind does not predict. People take drugs for the mind. Drugs may enable experiences that make them appealing. All experiences are in the mind. Answering questions about drug use or addiction should therefore be linked directly with the mind. The same applies to social media, and to generative AI. Some aspects of social media went awry because there was no caution or safety effort concerned with the human mind: with what the platforms might do to it, or with how, even conceptually, they produce the effects they have. The human mind has a structure; it has functions and components. The brain and the mind are within the cranium, but they are not the same. There are parts of the brain where the mind is given off, or where it can be said that they meet, but all thoughts, feelings, memories, emotions and reactions are the mind. Memory includes the modulation, regulation or control of internal senses, since their limits and extents are known. Cells and molecules of the brain organize the components of the mind for functions. For AI ethics, how does the human mind factor in? For LLM alignment, how should it be ensured that models remain within areas that are safe in their effects on the mind? Why does a picture have an effect on someone? How does the mind generally spot a fake? What is the difference between what is labelled an optical illusion and falling for a fake? Is a property acquired in the mind, in a moment, that becomes what interprets an image, without enough time to acquire other properties or to spot weaknesses? In shaping generative AI for improved safety, it may be useful to accompany LLM products and services with a conceptual model of how the mind works, to ensure that their risks to society, in some aspects, are minimized.
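One very minimal way to picture the proposal above, pairing an LLM service with a model of mental effects, is a screening layer that tags model output with the kinds of affect it might trigger before release. The sketch below is purely illustrative and not from the article: the category names, cue phrases, and function names are all invented assumptions, standing in for whatever a real conceptual model of the mind would supply.

```python
# Hypothetical sketch: a screening layer that flags LLM output by the
# mental effects it might have, before that output reaches a user.
# AFFECT_CATEGORIES and its cue phrases are invented placeholders for
# a real conceptual model of the mind; nothing here is a known API.

AFFECT_CATEGORIES = {
    "despair": {"hopeless", "worthless", "no way out"},
    "craving": {"just one more", "you deserve it"},
}


def screen_output(text: str) -> list[str]:
    """Return the affect categories whose cue phrases appear in text."""
    lowered = text.lower()
    return sorted(
        category
        for category, cues in AFFECT_CATEGORIES.items()
        if any(cue in lowered for cue in cues)
    )


def release(text: str) -> tuple[bool, list[str]]:
    """Allow output only when no affect category is flagged."""
    flags = screen_output(text)
    return (len(flags) == 0, flags)
```

For example, `release("Things feel hopeless.")` would block the text and report the `despair` category, while neutral text passes through. A real system would replace the keyword lists with something far richer, but the shape, a mind-effect model sitting between model and user, is the point being sketched.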
Artificial Intelligence (AI) has made tremendous progress in recent years, with large language models (LLMs) at the forefront of this development.