If a smartphone, while falling to the ground, knows the magnitude of its fall and can wave or extend edges to keep its screen from cracking or shattering, then that smartphone has some consciousness.
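The detection half of this is already feasible: during free fall, the magnitude of a phone's accelerometer reading drops toward zero, instead of the roughly one g it shows at rest. A minimal sketch, assuming a stream of (x, y, z) samples in m/s² and an arbitrarily chosen threshold (the function name and values here are illustrative, not any vendor's API):

```python
import math

FREE_FALL_THRESHOLD = 2.0  # m/s^2; well below the ~9.81 m/s^2 read at rest

def is_free_fall(x, y, z):
    """Return True if the accelerometer sample suggests the device is in free fall."""
    magnitude = math.sqrt(x * x + y * y + z * z)
    return magnitude < FREE_FALL_THRESHOLD

print(is_free_fall(0.0, 0.0, 9.81))  # at rest: prints False
print(is_free_fall(0.1, 0.0, 0.2))   # falling: prints True
```

In practice such a check would run continuously on sensor interrupts and require the condition to hold for several consecutive samples before reacting, which is how hard drives have long parked their heads before impact. Knowing it is falling, of course, is not the same as fearing the fall.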
There are many excellent voice and AI assistant devices, but none of them understands, or does anything about, its own falling. So while stories circulate that this device from that company is sentient or has consciousness, there is also a fall test to show otherwise.
It is possible, as with airbags, to design sensors, add hardware features, train for impact situations, and so on. But the human experience of slipping, in which, within microseconds, most of the senses combine to prepare against landing on the head or in some other more harmful way, could be difficult for a device to achieve unless that were its only function.
Every external stimulus that becomes a sensory input goes first to the thalamus, or, for smell, to the olfactory bulb. There the inputs are integrated before being relayed to the cortex.
Senses do not go directly to where they would be interpreted; they first arrive somewhere where, theoretically, they become something, and in that uniform form they head to other places.
More clearly, senses, as postulated here, are integrated into thought or a form of thought. This is what gets distributed for knowing, feeling, and reaction.
It is thought, or its form, that memory stores. Thoughts are what make decisions quickly during a slip, steering toward a safer landing.
Usually, neurons are firing all the time. The brain is flush with chemical and electrical impulses, but the key transport is thought: what a person experiences is known by thought, and thought is how a decision is made to avoid the worst.
Neurons, by current research, do not yet tell enough of the story about how experiences of the world are made. Neuroimaging does not see or show thoughts. Neurotechnology tracks only a limited form of them. Yet thoughts are the internal transport quantity, traveling to locations and destinations in the brain, giving meaning to everything: objects, feelings, time, cycles, pain, fear, and so on.
The smartphone does not have thoughts. It has data.
The driverless car has neural networks, with input, hidden, and output layers, not thoughts. Like the smartphone, it has excellent data sharpness in prediction, learning, voice, and so on, yet it can do nothing when falling, because it has no data that travels to a location to fear and then prepare against the worst. Autonomous vehicles likewise lack sensory equivalents, lack fear, and so sometimes crash or err.
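The layered structure named above can be made concrete in a few lines. This is a hypothetical, minimal forward pass with made-up weights, assuming two sensor inputs and a single "brake" score; it illustrates the input, hidden, and output layers, not any actual driving model:

```python
import math

def sigmoid(v):
    # Squash a weighted sum into the range (0, 1).
    return 1.0 / (1.0 + math.exp(-v))

def layer(inputs, weights, biases):
    """One dense layer: a weighted sum per neuron, then a sigmoid activation."""
    return [sigmoid(sum(w * i for w, i in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# Input layer: two sensor readings (illustrative values only).
inputs = [0.5, -1.2]

# Hidden layer: three neurons with arbitrary weights.
hidden = layer(inputs,
               weights=[[0.8, -0.4], [0.3, 0.9], [-0.5, 0.2]],
               biases=[0.1, -0.2, 0.05])

# Output layer: one neuron producing a score between 0 and 1.
output = layer(hidden,
               weights=[[1.0, -0.7, 0.6]],
               biases=[0.0])

print(output[0])
```

Whatever number emerges is only a transformation of data through fixed arithmetic. Nothing in the pass resembles the transport the essay attributes to thought; the data never goes anywhere to be feared.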
Consciousness, which is what it means to be and to know, can be dissected into thought and memory. Memory stores capsules of thought in the smallest of units, transporting them to groups bearing similarities.
Thoughts are, theoretically, in far smaller units than the smallest possible unit of data.
Data would have to become something smaller, and then track the pathways of thought in the brain, to have any chance at similarity with natural consciousness. This could be the missing link for safer autonomous driving.