Real Artificial Intelligence vs. Fake Artificial Intelligence

Artificial intelligence (AI) remains a loosely defined and often misunderstood term.

So what's the difference between real AI and fake AI?

We humans are full of biases and prejudices, such as the so-called anthropomorphic mentality: “the attribution of distinctively human-like feelings, mental states, and behavioral characteristics to inanimate objects, animals, and in general to natural phenomena and supernatural entities”.

We like everything around us to be like us, anthropomorphizing religious figures, animals, the environment, and technological artifacts (from computational artifacts to robots), including AI, which is developed to replicate humans in body, mind and behavior.

The human body, brain, intelligence, mind and behavior are all the (privileged) sources of inspiration for AI, both as models to emulate and as goals to achieve.

As a result, a few categories of human-centric, human-replicating AI, simulating or faking human brains, cognitive functions, behavioral characteristics or human bodies, are prevalent:

  • AI which can replicate biological neural networks, such as deep machine learning (DL) or deep neural networks (DNNs);
  • AI which can think or act like humans do;
  • AI in terms of an essential rationality or goal-directed behavior, i.e., thinking or acting rationally (Russell and Norvig).

Thus, the human body, brain and behavior are projected onto human-like AI models, algorithms and applications, or robots, with all the attendant consequences, such as a highly probable future of "Extinction": synthetics vs. humans.

It is clear and plain that such an AI/ML/DL is the highway to Technological Unemployment and Omnicide (Anthropogenic Human Extinction), the termination of Homo sapiens as a species.

Of all possible scenarios of omnicide, such as climate change, global nuclear annihilation, biological warfare, ecological collapse, and emerging technologies such as biotechnology or self-replicating nanobots, the most real one is human-replicating machine intelligence and learning (MIL) and cognitive technologies imitating cognitive functions/skills and capacities/capabilities.

This is not the mere "existential risk from artificial general intelligence" hypothesis. Human-replicating machine intelligence and learning (MIL) and cognitive technologies, as a fake, unreal AI, are superseding, supplanting, surpassing and subverting humanity, developing superhuman performance in all human areas. The AlphaZero algorithm has demonstrated, in the domain of strategy games, that fake AI systems can progress rapidly from narrow human-level ability to narrow superhuman ability, becoming "superhuman" in ever more parts and sides of human cognition and behavior.

The list of those who have pointed to the risks of AI includes such names as Alan Turing, Norbert Wiener, I.J. Good, Marvin Minsky, Elon Musk, Professor Stephen Hawking and even Microsoft co-founder Bill Gates.

“Mark my words, AI is far more dangerous than nukes.” “With artificial intelligence we are summoning the demon.” Tesla and SpaceX founder Elon Musk

“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization.” “The development of full artificial intelligence could spell the end of the human race… It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.” The late physicist Stephen Hawking

“Computers are going to take over from humans, no question...Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?” Apple co-founder Steve Wozniak

“I am in the camp that is concerned about super intelligence,” Bill Gates

AI has the ability to “circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence.” Pope Francis, “The Common Good in the Digital Age”

Real AI vs. Fake AI

AI is powering a dramatic change in every part of human life, in every society, economy and industry across the globe, emerging as a strategic general-purpose technology.

It is therefore most important to know what intelligence is in general, with all its major kinds, technologies, systems and applications, and how it all interrelates with reality and causality.

What is real intelligence?

It is the topmost, true intelligence dealing with reality in terms of world models and data/information/knowledge representations for cognition and reasoning, understanding and learning, problem-solving, prediction and decision-making, and interacting with the environment.

An intelligence is any entity that models and simulates the world in order to interact effectively and sustainably with any environment: physical, natural, mental, social, digital or virtual. This is a common definition covering intelligent systems of any complexity: human, machine or alien intelligences.

What is Artificial Intelligence?

It is a man-made anthropomorphic intelligence, developed as simple rule-based systems, symbolic logical general AI and/or sub-symbolic statistical narrow AI. Such an AI is an automation technology that emulates human performance, typically by learning from it, to enhance existing applications and processes.

A popular concept in AI is that of an intelligent agent (IA): an autonomous entity which acts upon an environment, directing its activity towards achieving goals (i.e. it is an agent), using observation through sensors and consequent actuators (i.e. it is intelligent). Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex; a reflex machine, such as a thermostat, is considered an example of an intelligent agent, as in the sketch below.
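As a minimal, purely illustrative sketch in Python (the class, names and thresholds are invented for the example, not any particular product's API), such a thermostat-style reflex agent maps sensor percepts to actuator commands via a fixed condition-action rule:

```python
class ThermostatAgent:
    """A minimal reflex agent: sense temperature, act on a fixed rule."""

    def __init__(self, setpoint_celsius: float = 21.0):
        self.setpoint = setpoint_celsius  # the goal the agent directs its activity towards

    def act(self, sensed_temperature: float) -> str:
        # Condition-action rule: no learning, no internal world model.
        if sensed_temperature < self.setpoint - 0.5:
            return "heat_on"
        if sensed_temperature > self.setpoint + 0.5:
            return "heat_off"
        return "idle"


agent = ThermostatAgent()
for reading in [18.2, 20.9, 23.4]:            # simulated sensor percepts
    print(reading, "->", agent.act(reading))  # actuator commands
```

Even this trivial agent fits the definition: it maps percepts to goal-directed actions, though it neither learns nor models its environment.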

Goal-directed behavior is misleadingly considered to be the essence of intelligence, formalized as an "objective function", negative or positive: a "reward function", a "fitness function", a "loss function", a "cost function", a "profit function" or a "utility function", all mapping an event [or values of one or more variables] onto a real number intuitively representing some "cost" or "loss" or "reward".
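As a hedged illustration of that formalization (the function names and numbers below are invented for the example), a "loss" maps an event onto a real number to be minimized, while a "reward"/"utility" maps it onto a number to be maximized:

```python
def squared_error_loss(predicted: float, observed: float) -> float:
    """Map an event (prediction, outcome) onto a real number representing 'cost'/'loss'."""
    return (predicted - observed) ** 2


def utility(profit: float, penalty: float) -> float:
    """A 'reward'/'utility'-style objective: higher is better."""
    return profit - penalty


print(squared_error_loss(3.0, 2.5))  # 0.25 -- a cost to minimize
print(utility(10.0, 4.0))            # 6.0  -- a utility to maximize
```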

What are Machine Learning, Deep Learning and Artificial Neural Networks?

Machine learning (ML) is a type of artificial intelligence (AI) viewed as a tabula rasa: it has no innate knowledge, data patterns, rules or concepts, and its algorithms use historical data as input to predict new output values.
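As a minimal sketch of this "historical data in, new output values out" pattern, assuming scikit-learn is available (the numbers are invented for illustration):

```python
from sklearn.linear_model import LinearRegression

# Historical data: advertising spend (input) and resulting sales (output).
X_history = [[1.0], [2.0], [3.0], [4.0]]   # feature: spend in $1000s
y_history = [2.1, 3.9, 6.2, 8.1]           # target: sales in $1000s

model = LinearRegression().fit(X_history, y_history)  # learn from the past
print(model.predict([[5.0]]))                          # predict a new output value
```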

Recommendation engines, fraud detection, spam filtering, malware threat detection, business process automation (BPA), predictive maintenance, customer behavior analytics, and business operational patterns are common use cases for ML.

Many of today's leading companies, such as Apple, Microsoft, Facebook, Google, Amazon, Netflix, Uber, Alibaba, Baidu, Tencent, etc., make Deep ML a central part of their operations.

Deep learning is a type of machine learning and artificial intelligence that imitates the human brain. It is a core part of data science, which includes statistics and predictive modeling. At its simplest, deep learning is a way to automate predictive analytics, with algorithms stacked in a hierarchy of increasing complexity and abstraction. Unable to understand the concept of things as feature sets, a computer program that uses DL algorithms must be shown a training set and sort through millions of videos, audio clips, words or images, i.e. patterns of data/pixels in the digital data, in order to label/classify the things.
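A minimal sketch of that "hierarchy of increasing complexity and abstraction", assuming PyTorch is installed (the layer sizes are arbitrary): raw inputs pass through stacked layers, each producing a more abstract representation, and the final layer outputs class scores used for labeling:

```python
import torch
import torch.nn as nn

# Stacked layers: each maps the previous representation to a more abstract one.
model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # raw pixels -> low-level features
    nn.Linear(256, 64),  nn.ReLU(),   # low-level  -> higher-level features
    nn.Linear(64, 10),                # features   -> class scores (labels)
)

fake_image = torch.randn(1, 784)      # a stand-in for one flattened 28x28 image
print(model(fake_image).shape)        # torch.Size([1, 10])
```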

Deep learning neural networks, an advanced class of machine learning algorithm known as artificial neural networks, underpin most deep learning models; the approach is also referred to as deep neural learning or deep neural networking.

DNNs come in many architectures, such as recurrent neural networks, convolutional neural networks and feedforward neural networks, each of which has benefits for specific use cases. All function in similar ways -- by feeding data in and letting the model figure out for itself whether it has made the right interpretation or decision about given data sets. Neural networks involve a trial-and-error process, so they need massive amounts of data on which to train, yet deep learning models struggle to train on unstructured, unlabeled data -- and about 80% of the data humans and machines create is unstructured and unlabeled.

Today's AAI is over-focused on specific behavioral traits [BEHAVIORISM] or cognitive functions/skills/capacities/capabilities [MENTALISM] and on specific problem-solving in specific domains, and such an AI still has a very long way to go.

The key problem with AAI is: “Will it ever be possible to create a general AI with sentience, conscience and consciousness?”

And there is its methodological mistake: that AI starts blank, acquiring knowledge as the outside world is impressed upon it. This is known as the tabula rasa theory: that any intelligence, natural or artificial, is born or created "without built-in mental content, and therefore all knowledge about reality comes from experience or perception" (Aristotle, Zeno of Citium, Avicenna, Locke, psychologists and neurobiologists, and AAI/ML/DNN researchers).

Ibn Sina argued that the human intellect at birth resembles a tabula rasa, a pure potentiality that is actualized through education and "comes to know". In Locke's philosophy, at birth the (human) mind is a "blank slate" without rules for processing data; data is added and rules for processing are formed solely by one's sensory experiences.

It was presented as "A New Direction in AI: Toward a Computational Theory of Perceptions" [Zadeh (2001)]. Here, "at the moment when the senses of the outside world of AI-system begin to function, the world of physical objects and phenomena begins to exist for this intelligent system. However, at the same moment of time the world of mental objects and events of this AI-system is empty. This may be considered as one of the tenets of the model of perception. At the moment of the opening of perception channels the AI-system undergoes a kind of information blow. Sensors start sending the stream of data into the system, and this data has not any mapping in the personal world of mental objects which the AI-system has at this time" (Subjective Reality and Strong Artificial Intelligence).

Here lies the mother of all mistakes. The corrected statement should read: "Sensors start sending the stream of data into the system, and this data has ALL its mapping in the personal world of mental objects which the AI-system has at this time".

In other words, at its very birth/creation the (human) mind or AI already has prior/innate/encoded/programmed knowledge of reality, with causative rules for processing the world's data, and an endless stream of new data is added to be processed by innate master algorithms.

In fact, the tabula rasa model of AI has been refuted by some successful ML game-playing systems such as AlphaZero, a computer program developed by DeepMind to master the games of chess, shogi and Go. It achieved superhuman performance in various board games using self-play and tabula rasa reinforcement learning, meaning it had no access to human games or hard-coded human knowledge about either game, BUT it was given the rules of the games.

Again, reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. In all its varieties -- associative, deep, safe or inverse -- it differs from supervised learning in not needing labelled input/output pairs to be presented, and in not needing sub-optimal actions to be explicitly corrected, focusing instead on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge), as the sketch below illustrates.
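A minimal sketch of that exploration/exploitation balance, using a made-up two-armed bandit and an epsilon-greedy rule (a toy illustration, not AlphaZero's actual algorithm):

```python
import random

true_payouts = [0.3, 0.7]   # hidden reward probabilities of two actions
estimates = [0.0, 0.0]      # the agent's learned value estimates
counts = [0, 0]
epsilon = 0.1               # fraction of steps spent exploring

for step in range(1000):
    if random.random() < epsilon:
        action = random.randrange(2)               # explore uncharted territory
    else:
        action = estimates.index(max(estimates))   # exploit current knowledge
    reward = 1.0 if random.random() < true_payouts[action] else 0.0
    counts[action] += 1
    # Incremental average: trial-and-error learning, no labelled input/output pairs.
    estimates[action] += (reward - estimates[action]) / counts[action]

print(estimates)   # the estimates should approach [0.3, 0.7]
```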

All in all, the real AI model covers the abstract intelligent agents (AIA) paradigm together with the tabula rasa model of AI, with all their real-world implementations as computer systems, biological systems, human minds, superminds [organizations], or autonomous intelligent agents.

In this rational-action-goal paradigm, an IA possesses an internal "model" of its environment, encapsulating all the agent's beliefs about the world.

What is Natural Intelligence?

It is the intelligence created by nature through natural evolutionary mechanisms: biological intelligence embodied in the brain, animal and human, as well as any hypothetical alien intelligence.

What is real intelligence? What is natural intelligence and artificial intelligence and how are they different from each other?

So, what's the difference between real and genuine AI and fake and fictitious AI?

When we all hear about artificial intelligence, the first thing that pops up is human-like machines and robots, from Frankenstein's creature to Skynet's synthetic intelligence, that wreak havoc on humans and Earth. And many people still think of it as dystopian sci-fi far away from the truth.

Humanity has two polar future worlds to be taken over by two polar AIs.

The first, mainstream paradigm is AAI (Artificial, Anthropic, Applied, Automated, Weak and Narrow "Black Box" AI), which is promoted by the big tech IT companies and an army of AAI researchers and scientists, developers and technologists, businessmen and investors, as well as all sorts of politicians and think-tankers. Such an AI is based on the principle that human intelligence can be defined in a way that a machine can easily mimic it and execute tasks, from the most simple to the most complex.

An example is Nvidia-style AAI: the company has recently released new fake AI/ML/DL software and hardware.

Conversational FAI framework. Real-time natural language understanding will transform how we interact with intelligent machines and applications.

The framework enables developers to use pre-trained deep learning models and software tools to create conversational AI services, such as advanced translation or transcription models, specialized chatbots and digital assistants.

FAI supercomputing. Nvidia introduced a new version of DGX SuperPod, Nvidia's supercomputing system that the vendor bills as made for AI workloads.

The new cloud-native DGX SuperPod uses Nvidia's Bluefield-2 DPUs, unveiled last year, providing users with a more secure connection to their data. Like Nvidia's other DGX SuperPods, this new supercomputer will contain at least 20 Nvidia DGX A100 systems and Nvidia InfiniBand HDR networking.

AI chipmakers can support three key AI workloads: data management, training and inferencing.

NVIDIA delivers GPU acceleration everywhere—to data centers, desktops, laptops, and the world’s fastest supercomputers. NVIDIA GPU deep learning is available on services from Amazon, Google, IBM, Microsoft, and many others.

The tech giants -- Google, Microsoft, Amazon, Apple and Facebook -- have also created chips made for fake AI/ML/DL, but these are intended for their own specific applications.

The Debate of Real AI vs. Fake AI

“I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.” — Geoffrey Hinton

“My view is: throw it all away and start again.” — Geoffrey Hinton

Why so? Because of a badly wrong assumption: AI = ML = DL = Human Brain.

The true paradigm is RAI (Real, Causal, Explainable and "White Box" AI), or the real, true, genuine and autonomous cybernetic intelligence, as opposed to the extant fake, false and fictitious anthropomorphic intelligence.

An example of real intelligence is a hybrid human-AI technology, Man-Machine Superintelligence, which is mostly unknown to big tech and the AAI army. Its goal is to mimic reality and mentality -- human cognitive skills and functions, capacities, capabilities and activities -- in computing machinery.

The Man-Machine Superintelligence as [RealAI] is all about five interrelated universes, the key factors of global intelligent cyberspace:

  • reality/world/universe/nature/environment/spacetime, as the totality of all entities and relationships;
  • intelligence/intellect/mind/reasoning/understanding, as human minds and AI/ML models;
  • data/information/knowledge universe, as the world wide web; data points, data sets, big data, global data, world’s data; digital data, data types, structures, patterns and relationships; information space, information entities, common and scientific and technological knowledge;
  • software universe, as the web applications, application software and system software, source or machine codes, as AI/ML codes, programs, languages, libraries;
  • hardware universe, as the Internet, the IoT, CPUs, GPUs, AI/ML chips, digital platforms, supercomputers, quantum computers, cyber-physical networks, intelligent machinery and humans

The question is how it is all represented, mapped, coded and processed in cyberspace/digital reality by computing machinery of any complexity, from smartphones to the internet of everything and beyond.

AI is the science and engineering of reality-mentality-virtuality [continuum] cyberspace, its nature, intelligent information entities, models, theories, algorithms, codes, architectures and applications.

Its subject is to develop the AI Cyberspace of physical, mental and digital worlds, the totality of any environments, physical, mental, digital or virtual, and application domains.

RAI is to represent and model, measure and compute, analyse, interpret, describe, predict, and prescribe, processing unlimited amounts of big data, transforming unstructured data (e.g. text, voice, etc.) into structured data (e.g. categories, words, numbers, etc.), and discovering generalized laws and causal rules for the future.
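One small, hedged example of such an unstructured-to-structured step, assuming scikit-learn is available (the documents are invented): free text is converted into a term-count matrix that downstream models can process as numbers and categories:

```python
from sklearn.feature_extraction.text import CountVectorizer

documents = [                              # unstructured data: free text
    "causal models explain the data",
    "deep models fit the data",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(documents)    # structured data: a term-count matrix

print(vectorizer.get_feature_names_out())  # the discovered vocabulary (columns)
print(X.toarray())                         # counts per document (rows)
```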

RAI as a symbiotic hybrid human-machine superintelligence is to overrule the extant statistical narrow AI with its branches, such as machine learning, deep learning, machine vision, NLP, cognitive computing, etc.

Artificial Intelligence Opportunities and Risks

Artificial intelligence presents an existential danger to humanity if it progresses as it is, as specialized superhuman automated machine learning systems, from task-specific cognitive robots to professional bots to self-driving autonomous transport.

As mentioned, the list of those who have pointed to the risks of fake AI includes Alan Turing, Norbert Wiener, I.J. Good, Marvin Minsky, Elon Musk, Professor Stephen Hawking and Microsoft co-founder Bill Gates.

Risks of Non-Real Artificial Intelligence

  • AI, Robotics and Automation-spurred massive job loss.
  • Privacy violations, a handful of big tech AI companies control billions of minds every day manipulating people’s attention, opinions, emotions, decisions, and behaviors with personalized information.
  • 'Deepfakes'
  • Algorithmic bias caused by bad data.
  • Socioeconomic inequality.
  • Weapons automation, AI-powered weaponry, a global AI arms race.
  • Malicious use of AI, threatening digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns), and social security (misuse of facial recognition technology in offices, schools and other venues).
  • Human brains hacking and replacement.
  • Destructive superintelligence — aka artificial general human-like intelligence

“Businesses investing in the current form of machine learning (ML), e.g. AutoML, have just been paying to automate a process that fits curves to data without an understanding of the real world. They are effectively driving forward by looking in the rear-view mirror,” says causaLens CEO Darko Matovski.

Anthropomorphism in AI, the attribution of human-like feelings, mental states, and behavioral characteristics to computing machinery (rather than to inanimate objects, animals, natural phenomena and supernatural entities), is an immoral enterprise.

For all AAI applications, this implies building AI systems based on embedded ethical principles, trust and transparency. The result is all sorts of frameworks for the ethical aspects of artificial intelligence, robotics and related technologies, as proposed by the EC, and EU guidelines on ethics in artificial intelligence: context and implementation.

Such a human-centric approach to human-like artificial intelligence... "highlights a number of ethical, legal and economic concerns, relating primarily to the risks facing human rights and fundamental freedoms. For instance, AI poses risks to the right to personal data protection and privacy, and equally so a risk of discrimination when algorithms are used for purposes such as to profile people or to resolve situations in criminal justice. There are also some concerns about the impact of AI technologies and robotics on the labour market (e.g. jobs being destroyed by automation). Furthermore, there are calls to assess the impact of algorithms and automated decision-making systems (ADMS) in the context of defective products (safety and liability), digital currency (blockchain), disinformation-spreading (fake news) and the potential military application of algorithms (autonomous weapons systems and cybersecurity). Finally, the question of how to develop ethical principles in algorithms and AI design".

Now, as to Ethics guidelines for trustworthy AI, trustworthy AI should be:

(1) lawful - respecting all applicable laws and regulations

(2) ethical - respecting ethical principles and values

(3) robust - both from a technical perspective while taking into account its social environment

Again, all the mess-up starts from its partial human-centric definition:

“Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).”

Or, the updated definition of AI: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).”

Why creating ML/AI based on human brains is not only immoral but will also never be successful can be seen from the deep learning NNs' limitations and challenges, as indicated below:

DL neural nets are good at classification and clustering of data, but they are not great at other decision-making or learning scenarios such as deduction and reasoning.

The biggest limitation of deep learning models is that they learn through observation. This means they only know what was in the data on which they trained. If a user has a small amount of data or it comes from one specific source that is not necessarily representative of the broader functional area, the models will not learn in a way that is generalizable.

The hardware requirements for deep learning models can also create limitations. Multicore high-performing graphics processing units (GPUs) and other similar processing units are required to ensure improved efficiency and decreased time consumption. However, these units are expensive and use large amounts of energy. Other hardware requirements include random access memory and a hard disk drive (HDD) or RAM-based solid-state drive (SSD).

Other limitations and challenges include the following:

Deep learning requires large amounts of data. Furthermore, the more powerful and accurate models will need more parameters, which, in turn, require more data.

Once trained, deep learning models become inflexible and cannot handle multitasking. They can deliver efficient and accurate solutions but only to one specific problem. Even solving a similar problem would require retraining the system.

There are many kinds of neural networks that form a sort of "zoo" with lots of different species and creatures for various specialized tasks. There are neural networks such as FFNNs, RNNs, CNNs, Boltzmann machines, belief networks, Hopfield networks, deep residual networks and other various types that can learn different kinds of tasks with different levels of performance.

Another major downside is that neural networks are a "black box" -- it's not possible to examine how a particular input leads to an output in any sort of explainable or transparent way. For applications that require root-cause analysis or a cause-effect explanation chain, this makes neural networks not a viable solution. For these situations where major decisions must be supported by explanations, "black box" technology is not always appropriate or allowed.
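One partial workaround, sketched here under the assumption that scikit-learn is available (and offered as an approximation, not a full explanation method), is to imitate a black-box model with a small, readable surrogate whose rules can at least be inspected:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=5, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X, y)   # opaque model
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))   # train the surrogate to mimic the black box

print(export_text(surrogate))            # approximate, human-readable decision rules
```

The surrogate only approximates the black box, so it cannot support a true cause-effect explanation chain; it merely makes the opacity problem easier to inspect.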

Any application that requires reasoning -- such as programming or applying the scientific method -- long-term planning and algorithm-like data manipulation is completely beyond what current deep learning techniques can do, even with large data.

It might be a combination of supervised, unsupervised and reinforcement learning that pushes the next breakthrough in AI forward.

Deep learning examples. Because deep learning models process information in ways similar to the human brain, they can be applied to many tasks people do. Deep learning is currently used in most common image recognition tools, natural language processing (NLP) and speech recognition software. These tools are starting to appear in applications as diverse as self-driving cars and language translation services.

Use cases today for deep learning include all types of big data analytics applications, especially those focused on NLP, language translation, medical diagnosis, stock market trading signals, network security and image recognition.

Machine Intelligence and Causation

It is widely recognized that understanding causality is the next challenge for machine learning. Deep neural nets do not interpret cause and effect, or why the associations and correlations exist.

Teaching machines to know/understand "why" means enabling them to transfer their causal data patterns [knowledge, rules, laws, generalizations] to other environments.

A causal AI platform keeps the advantages of comprehensive digitization and automation – one of the key benefits of machine learning – allowing zettabytes of data to be cleaned, sorted and monitored simultaneously, and combines all this data with causal data models and causal/explainable insights – traditionally the sole competency of domain experts.
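A minimal illustration of why associations alone are not enough, using simulated data with a hidden confounder (all numbers invented, assuming NumPy is available): the observational correlation between X and Y is strong, yet setting X by intervention reveals that X has no causal effect on Y:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# A confounder Z drives both the "treatment" X and the outcome Y;
# X itself has no causal effect on Y.
Z = rng.normal(size=n)
X = Z + rng.normal(scale=0.5, size=n)
Y = 2.0 * Z + rng.normal(scale=0.5, size=n)

print("observational correlation:", np.corrcoef(X, Y)[0, 1])   # strongly positive

# Intervention do(X): set X at random, cutting its link to Z.
X_do = rng.normal(size=n)
Y_do = 2.0 * Z + rng.normal(scale=0.5, size=n)                 # Y is unaffected by X
print("correlation under intervention:", np.corrcoef(X_do, Y_do)[0, 1])  # near zero
```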

Real AI as a Global Predictor

A causal AI machine is powerful enough not only to explain, but also to predict, forecast or estimate the [causal] timelines of events, human life, societies, nature and the universe.

Today, the future is largely unpredictable; we are even unaware of the number of epidemic waves to come.

On a small practical scale, we have predictive analytics with its machine/deep learning models, which exploit pattern recognition to analyze current and historical facts and make predictions about the future.

Among predictive techniques there are many forecasting methods, qualitative and quantitative, supplemented by strategic foresight or futures [futures studies, futures research or futurology], with no big utility. It all revolves around a pattern-based understanding of past and present in order to explore the possibility of future events and trends.

On a large scale, the timeline of nature is a big mystery even in hindsight, not to mention the timeline of the universe, of whose history, far future and ultimate fate we have only poor ideas.

Summing up

Real [Causal] AI as a Synthesized Man-Machine Intelligence and Learning (MIL) is one of the greatest strategic innovations in all human history. It is fast emerging as an integrating general purpose technology (GPT) embracing all the traditional GPTs, such as electricity, computing, and the internet/WWW, as well as the emerging technologies: Big Data, Cloud and Edge Computing, ML, DL, Robotics, Smart Automation, the Internet of Things, biometrics, AR (augmented reality)/VR (virtual reality), blockchain, NLP (natural language processing), quantum computing, 5-6G, and bio-, neuro-, nano-, cognitive and social network technologies.

But today's narrow, weak and automated AI of Machine Learning and Deep Learning, which implements human brains/mind/intelligence in machines that sense, understand, think, learn, and behave like humans, is an existential threat to the human race by virtue of its anthropic conception and technology, strategy and policy.

The Real AI is to merge Artificial Intelligence (Weak AI, General AI, Strong AI and ASI) and Machine Learning (Supervised learning, Unsupervised learning, Reinforcement learning or Lifelong learning) as the most disruptive technologies for creating real-world man-machine super-intelligent systems.


Azamat Abdoullaev

Tech Expert

Azamat Abdoullaev is a leading ontologist and theoretical physicist who introduced a universal world model as a standard ontology/semantics for human beings and computing machines. He holds a Ph.D. in mathematics and theoretical physics. 

   