Artificial Intelligence vs Machine Learning vs Artificial Neural Networks vs Deep Learning

Clearing the Confusion Around Artificial Intelligence, Machine Learning, Artificial Neural Networks and Deep Learning

Artificial intelligence (AI), machine learning (ML), artificial neural networks (ANN) and deep learning (DL) are often used interchangeably, but they do not refer to quite the same things.

[Figure: AI_Complete_Graph.jpeg. Source: HPE]

What is Artificial Intelligence?

Artificial intelligence applies to computing systems designed to perform tasks usually reserved for human intelligence, using logic, if-then rules, and decision trees. AI recognizes patterns in vast amounts of quality data, providing insights, predicting outcomes, and making complex decisions.
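To make the "logic, if-then rules, and decision trees" side of this concrete, here is a minimal sketch of a hand-written rule tree; the loan-approval scenario and all thresholds are invented for illustration, not taken from this article:

```python
# Minimal sketch of rule-based symbolic AI: a hand-written decision tree.
# The loan-approval scenario and thresholds are invented for illustration.

def approve_loan(credit_score: int, income: float, debt_ratio: float) -> str:
    """Walk a fixed if-then rule tree and return a decision."""
    if credit_score < 600:
        return "reject"             # rule 1: poor credit always rejects
    if debt_ratio > 0.4:
        return "manual review"      # rule 2: high leverage needs a human
    if income >= 50_000:
        return "approve"            # rule 3: solid credit and income
    return "manual review"          # default: no rule fired decisively

print(approve_loan(credit_score=720, income=65_000.0, debt_ratio=0.25))
```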

What is Machine Learning?

Machine learning is a subset of AI that utilizes advanced statistical techniques to enable computing systems to improve at tasks with experience over time. Chatbots like Amazon’s Alexa and Apple’s Siri improve every year thanks to constant use by consumers coupled with the machine learning that takes place in the background.
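"Improving at tasks with experience" can be shown in a few lines. Here is a minimal sketch with synthetic data of my own invention: a one-parameter model whose estimate approaches the true value as it sees more examples:

```python
# Minimal sketch of learning from experience: online gradient descent on
# a one-parameter model y = w * x. Synthetic data with true w = 3.0; the
# estimate improves as more examples are seen.

true_w, w, lr = 3.0, 0.0, 0.05
data = [(x, true_w * x) for x in range(1, 101)]   # (input, target) pairs

for i, (x, y) in enumerate(data, start=1):
    error = w * x - y               # prediction error on this example
    w -= lr * error / x             # normalized gradient step
    if i % 25 == 0:
        print(f"after {i:3d} examples: w = {w:.3f} (target {true_w})")
```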

What is an Artificial Neural Network?

An artificial neural network (ANN) is a computational model that mimics the way nerve cells work in the human brain. It is designed to simulate the way the human brain analyzes and processes information.
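At its smallest scale, an ANN is built from units like the one sketched below: a weighted sum of inputs passed through a nonlinearity, loosely analogous to a nerve cell's firing. The weights and inputs here are arbitrary illustrative values:

```python
# Minimal sketch of a single artificial neuron: weighted sum + sigmoid.
# Inputs, weights and bias are arbitrary illustrative values.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of inputs squashed through a sigmoid activation."""
    z = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))    # "firing rate" in (0, 1)

print(neuron([0.5, 0.8], [1.2, -0.7], bias=0.1))
```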

What is Deep Learning?

Deep learning is a subset of machine learning that uses advanced algorithms to enable an AI system to train itself to perform tasks by exposing multilayered neural networks to vast amounts of data. It then uses what it learns to recognize new patterns contained in the data. Learning can be human-supervised learning, unsupervised learning, and/or reinforcement learning, as Google's DeepMind used to learn how to beat humans at the game of Go.
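The "multilayered" part is what the "deep" in deep learning refers to: later layers operate on patterns extracted by earlier ones. A minimal sketch follows; the layer sizes and random weights are arbitrary, untrained, and purely illustrative:

```python
# Minimal sketch of depth: stacked layers, each consuming the previous
# layer's outputs. Weights are random and untrained; sizes are arbitrary.
import random

random.seed(0)

def layer(inputs, n_out):
    """One fully connected layer with random weights and ReLU activation."""
    return [
        max(0.0, sum(x * random.uniform(-1, 1) for x in inputs))
        for _ in range(n_out)
    ]

x = [0.2, 0.9, 0.4]      # raw input features
h1 = layer(x, 5)         # layer 1: low-level patterns
h2 = layer(h1, 4)        # layer 2: combinations of layer-1 patterns
print(layer(h2, 2))      # output layer
```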

What's Fundamentally Wrong with Today's AI?

Today’s artificial intelligence (AI) is limited. It still has a long way to go, whatever its form, sort or variety:

  • Narrow, general or super AI;
  • Embedded or trustworthy AI;
  • Cloud AI or edge AI;
  • Cognitive computing or AI chips;
  • Machine learning, deep learning or artificial neural networks;
  • AI platforms, tools, services, and applications.

They all are somehow mindlessly engaged in copycatting some parts of human intelligence, cognition or behavior, showing zero mind, intellect or understanding.

The true goal of Machine Intelligence and Learning is not to equal or exceed human intelligence, but to become the last and best of the “general purpose technologies” (GPTs).

GPTs are technologies that can affect an entire economy at a global level, revolutionizing societies through their impact on pre-existing economic and social structures.

Examples of GPTs: the steam engine, the railroad, interchangeable parts and mass production, electricity, electronics, material handling, mechanization, nuclear energy, control theory (automation), the automobile, the computer, the Internet, medicine, space industries, robotics, software automation and artificial intelligence.

The four most important GPTs of the last two centuries were the steam engine, electric power, information technology (IT), and general artificial intelligence (gAI).

And the time between invention and implementation has been shrinking, roughly halving with each GPT wave: about 80 years for the steam engine, 40 years for electricity, and about 20 years for IT.

[Figure: AI_ML_DL.png. Source: Comin and Mestieri (2017)]

Now the implementation lag for the MIL-GPT technologies will be about 5 years.
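The halving pattern behind that five-year estimate is simple arithmetic; the toy loop below just extends the article's 80/40/20-year sequence two more steps:

```python
# The halving pattern behind the ~5-year estimate: each GPT wave's
# implementation lag is roughly half the previous one (80 -> 40 -> ... -> 5).
lag = 80   # steam engine, in years
for gpt in ["steam engine", "electricity", "IT", "next wave", "MIL-GPT"]:
    print(f"{gpt:12}: ~{lag} years from invention to widespread use")
    lag //= 2
```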

“My assessment about why A.I. is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are. And this is hubris and obviously false. Working with A.I. at Tesla lets me say with confidence that we’re headed toward a situation where A.I. is vastly smarter than humans, and I think that time frame is less than five years from now. But that doesn’t mean that everything goes to hell in five years. It just means that things get unstable or weird.”

Elon Musk

The Fourth Industrial Revolution

The First Industrial Revolution used water and steam power to mechanize production. The Second Industrial Revolution used electric power to create mass production. The Third Industrial Revolution used electronics and information technology to automate production.

Now a Fourth Industrial Revolution is building on the Third, the digital revolution that has been occurring since the middle of the last century.

The COVID-19 pandemic serves as a boundary line between Industry 3.0 and Industry 4.0, which is marked by velocity, scope, and systems impact.

The Fourth Industrial Revolution is a fusion of advances in AI, robotics, the Internet of Things (IoT), 3D printing, genetic engineering, quantum computing, and other technologies, all as an Integrated Human-Machine Intelligence and Learning, HMIL, or Global Intelligence:

HMIL = AI + ML + DL + NLU + 6G + Bio-, Nano-, Cognitive engineering + Robotics + SC, QC + the Internet of Everything + MME, BCE + Human Minds = Encyclopedic Intelligence = Real I = Global [Human Mind - Machine] Intelligence = Global Supermind

Now let's study what an anthropocentric AI is, its upsides and downsides, and why we need to kill the confusing constructs “Artificial Intelligence”, "Machine Learning", and "Deep Neural Network", as a human-created road to human extinction:

[Figure: AI_Robot.jpeg]

Anthropomorphism in Computing Machinery

Anthropomorphism is the attribution of human traits, emotions, intentions and behavior to non-human entities, animate or inanimate, natural or artificial; it is an innate tendency of human psychology.

It is generally defined as "the attribution of distinctively human-like feelings, mental states, and behavioral characteristics to inanimate objects, animals, and in general to natural phenomena and supernatural entities".

Anthropomorphism in computing machinery is now a 70-year-old scientific question, dating to Turing’s “Computing Machinery and Intelligence”, published in Mind in 1950 (Mind, Volume LIX, Issue 236, October 1950, pages 433–460).

In his imitation game for thinking machines, Turing suggested that all-equivalent digital electronic computers could mimic ‘discrete state machines’ consisting of four elements (a minimal transition-table sketch follows the list):

  • Store/Memory [NAND flash memory];
  • Executive unit, [CPU, GPU, AI PU, artificial neural networks];
  • Control, operating systems, [software or brainware];
  • Data, information, knowledge [mindware or intelligence].
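For concreteness, here is a minimal sketch of the kind of ‘discrete state machine’ Turing had in mind: a machine whose entire behavior is given by a state-transition table. The three-state cycling machine with an interrupting lever loosely follows Turing's 1950 wheel-and-lamp example; the Python encoding is my own.

```python
# A 'discrete state machine' as a transition table, loosely after Turing's
# 1950 three-state wheel-and-lamp example. The encoding is illustrative.

TRANSITIONS = {            # (current state, input) -> next state
    ("q1", "run"): "q2", ("q2", "run"): "q3", ("q3", "run"): "q1",
    ("q1", "hold"): "q1", ("q2", "hold"): "q2", ("q3", "hold"): "q3",
}
OUTPUT = {"q1": "lamp off", "q2": "lamp off", "q3": "lamp on"}

state = "q1"
for signal in ["run", "run", "hold", "run"]:
    state = TRANSITIONS[(state, signal)]
    print(f"input={signal:4} -> state={state}, {OUTPUT[state]}")
```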

To program a computer to carry out intellectual functions means putting the appropriate instruction table into the machine: proper intelligent programming, coding data types, data structures, data patterns and algorithms, on the assumption that human-level intelligence is the best standard for intelligence.

Anthropomorphism in Brain-Inspired AI Hardware

So, when you read "Nvidia unveils monstrous A100 AI chip with 54 billion transistors and 5 petaflops of performance", see it as empty hype and buzzwords.

There are no true, real AI chips in existence today, only various "AAI chips" that form the brain of an AAI system and replace CPUs and GPUs; this is where most progress has yet to be achieved.

While GPUs are typically better than CPUs at AI processing, they often fall short because they are specialized for computer graphics and image processing, not neural networks.

The AAI industry needs specialized processors to enable efficient processing of AAI applications, modelling and inference. As a result, chip designers are now working to create specialized processing units.

These come under many names, such as NPU, TPU, DPU, SPU, etc., but a catch-all term is the AAI processing unit (AAI PU), forming the brain of an AAI System on a Chip (SoC).

An AAI SoC also includes (a toy composition sketch follows the list):

  • the neural processing unit, or matrix multiplication engine, where the core operations of an AAI SoC are carried out;
  • controller processors, based on RISC-V, ARM, or custom-logic instruction set architectures (ISA), to control and communicate with all the other blocks and the external processor;
  • SRAM;
  • I/O;
  • the interconnect fabric between the processors (AAI PU, controllers) and all the other modules on the SoC.
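Purely as an illustration of how those blocks compose, here is a toy sketch in Python; the block names follow the list above, while the default values (memory size, interfaces) are invented and not taken from any real SoC:

```python
# Toy composition of the AAI SoC blocks listed above. Block names follow
# the text; all concrete values are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AaiSoC:
    npu: str = "matrix multiplication engine"      # core AAI PU block
    controllers: list = field(default_factory=lambda: ["RISC-V", "ARM"])
    sram_mb: int = 32                              # on-chip working memory
    io: list = field(default_factory=lambda: ["PCIe", "Ethernet"])
    interconnect: str = "on-chip fabric"           # links PU, controllers, I/O

print(AaiSoC())
```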

The AAI PU was created to execute ML algorithms, typically by operating on predictive models such as artificial neural networks. These chips are usually classified as training chips or inference chips, since the two workloads are generally performed independently.

AAI PUs are generally required for the following:

  • Accelerate the computation of ML tasks severalfold (by nearly 10,000 times) compared with GPUs;
  • Consume less power and improve resource utilization for ML tasks compared with GPUs and CPUs.

Unlike CPUs and GPUs, the design of single-action AAI SoCs is far from mature.

Specialized AI chips deal with specialized ANNs and are designed to do two things with them: task-specific training and inference, only for facial recognition, gesture recognition, natural language processing, image searching, spam filtering, and the like.
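The training/inference split is easy to see in miniature. Here is a hedged toy sketch (synthetic spam-filter data of my own invention): "training" fits a model once, at high cost; "inference" then applies the frozen model cheaply, over and over:

```python
# Toy illustration of the training vs. inference split. "Training" learns
# a decision threshold once; "inference" applies the frozen model cheaply.
# The spam-score data are invented for illustration.

def train(samples):
    """Learn a threshold from labeled (score, is_spam) pairs. Done once."""
    spam = [s for s, label in samples if label]
    ham = [s for s, label in samples if not label]
    return (min(spam) + max(ham)) / 2.0

def infer(threshold, score):
    """Apply the frozen model. Done constantly, at low cost."""
    return score >= threshold

model = train([(0.9, True), (0.8, True), (0.2, False), (0.3, False)])
print(model, infer(model, 0.7), infer(model, 0.1))   # 0.55 True False
```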

In all, there are {Cloud, Edge, Inference, Training} chips for AAI models of specific tasks. Examples of (Cloud + Training) chips include NVIDIA’s DGX-2 system, which totals 2 petaFLOPS of processing power from 16 NVIDIA V100 Tensor Core GPUs, and Intel Habana’s Gaudi chip; systems like these were used to train models such as Facebook’s photo tagging and Google Translate.

(Cloud + Inference) chips, by contrast, run those trained models on the data you input. Examples include AAI chatbots and most AAI-powered services run by large technology companies. Sample chips here include Qualcomm’s Cloud AI 100, a large chip used for AAI in massive cloud data centres, Alibaba’s Hanguang 800, and Graphcore’s Colossus MK2 GC200 IPU.

Examples of (Edge + Inference) on-device chips include Kneron’s own chips, such as the KL520 and the recently launched KL720, which are low-power, cost-efficient chips designed for on-device use, along with Intel’s Movidius and Google’s Coral TPU.
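One reason such edge chips can run inference at low power is aggressive arithmetic reduction, for example 8-bit weight quantization. The following is a minimal numpy sketch of the idea, my own toy example rather than any vendor's actual toolchain:

```python
# Toy sketch of 8-bit weight quantization, one trick behind low-power
# edge inference. Not any vendor's toolchain; weights are invented.
import numpy as np

w = np.array([0.12, -0.53, 0.98, -0.07], dtype=np.float32)  # fp32 weights
scale = np.abs(w).max() / 127.0                 # map the range onto int8
w_int8 = np.round(w / scale).astype(np.int8)    # 4x smaller, integer math
w_back = w_int8.astype(np.float32) * scale      # dequantize to compare

print(w_int8)                          # e.g. [ 16 -69 127  -9]
print(np.abs(w - w_back).max())        # small quantization error
```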

All of these different types of chips, training or inference, and their different implementations, models, and use cases are expected to shape the AAI of Things (AAIoT) future.

Human-Machine Intelligence is not mere intelligent hardware; it is software or brainware and, above all, mindware or dataware.

Anthropomorphism in Brain-Inspired AI Brainware

"An anthropomorphic framework is not necessary but often appears to underlie the claim that AI, particularly Deep Neural Network (DNN), is key to gaining a better understanding of how the human brain works; and in how enthusiastically the achievements of AI, especially of DNN, have been received.

DNN architecture represents one of the most advanced and promising fields within AI research. It is implemented in the majority of existing AI-related applications, including translation services for Google, facial recognition software for Facebook, and virtual assistants like Apple’s Siri. The widely hailed AlphaZero victory against the human Go world champion was the result of applying DNN and reinforcement learning. This success had a huge impact on people’s imagination, contributing to the growing enthusiasm around using AI to emulate and/or enhance human abilities.

And yet, caution is key. While actual artificial networks include many characteristics of neural computation, such as nonlinear transduction, divisive normalization, and maximum-based pooling of inputs, and they replicate the hierarchical organization of mammalian cortical systems, there are significant differences in structure. In a recent article, Ullman notes that almost everything we know about neurons (structure, types, interconnectivity) has not been incorporated in deep networks.

In particular, while the biological neuronal architecture is characterized by a heterogeneity of morphologies and functional connections, the actual DNN uses a limited set of highly simplified homogeneous artificial neurons. However, while Ullman provides a balanced analysis of the technology and calls for avoiding anthropomorphic interpretations of AI, his analysis at times suggests a subtle form of anthropomorphism, if not in the conceptual framework, at least in the language used. For instance, he wonders whether some aspects of the brain overlooked in actual AI might be key to reaching Artificial General Intelligence (AGI), seemingly taking for granted (like the founders of AI) that the human brain is the (privileged) source of inspiration for AI, both as a model to emulate and a goal to achieve. Moreover, he refers to learning as the key problem of DNN and technically defines it as the adjustment of synapses to produce the desired outputs to their inputs. While he has a technical, non-biological definition of synapses (i.e., numbers in a matrix vs. electrochemical connections among brain cells), the mere use of the term “synapse” might suggest an interpretation of AI as an emulation of biological nervous systems.

The problem with anthropomorphic language when discussing DNNs is that it risks masking important limitations intrinsic to DNN which make it fundamentally different from human intelligence. In addition to the issue of the lack of consciousness of DNN, which arguably is not just a matter of further development, but possibly of lacking the relevant architecture, there are significant differences between DNN and human intelligence. It is on the basis of such differences that it has been argued that DNNs can be described as brittle, inefficient, and myopic compared to the human brain. Brittle because what are known as “generative adversarial networks” (GANs), special DNNs developed to fool other DNNs, show that DNNs can be easily fooled through perturbation. This entails minimally altering the inputs (e.g., the pixels in an image), which results in outputs by the DNN that are completely wrong (e.g., misclassification of the image), showing that DNN lacks some crucial components of the human brain for perceiving the real world. Inefficient because in contrast to the human brain, current DNNs need a huge amount of training data to work. One of the main problems faced by AI researchers is how to make AI learning unsupervised, i.e., relatively independent from training data. This current limitation of AI might be related also to the fact that, while the human brain relies on genetic, “intrinsic” knowledge, DNNs lack it. Finally, DNNs are myopic because while refining their ability to discriminate between single objects, they often fail to grasp their mutual relationships.

In short, even though DNNs achieved impressive results in complex scene classification, as well as semantic labeling and verbal description of scenes, it seems reasonable to conclude that, because they lack the crucial cognitive components of the human brain that enable it to make counterintuitive inferences and commonsense decisions, the anthropomorphic hype around neural network algorithms and deep learning is overblown. DNNs do not recreate human intelligence: they introduce a new mode of inference that is better than ours in some respects and worse in others".
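The "brittleness" point above can be made concrete even without a DNN: for a simple linear classifier, a tiny perturbation aimed against the weights flips the decision. Here is a hedged toy sketch with my own invented numbers, far simpler than the GAN-based attacks the quoted passage mentions:

```python
# Toy illustration of adversarial brittleness: a tiny, targeted input
# change flips a linear classifier's decision. All numbers are invented;
# real attacks on DNNs are more elaborate but follow the same idea.
import numpy as np

w = np.array([1.0, -2.0, 0.5])     # classifier weights: score > 0 -> class A
x = np.array([0.3, 0.1, 0.4])      # original input, classified as A
eps = 0.12                         # perceptually small step size

x_adv = x - eps * np.sign(w)       # FGSM-style perturbation against w
print(w @ x, "->", w @ x_adv)      # 0.3 -> -0.12: the decision flips
```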

Anthropomorphism in Brain-Inspired Technological Singularity

Technological singularity is another anthropomorphic techno-fiction, provoked by I. J. Good's speculations, as if the first human-like ultra-intelligent machine were to be the last human invention.

What is indeed under incessant study, development and deployment is NOW emerging as real and true AI: Human-Machine Intelligence and Learning, HMIL, or Global Human-Digital Intelligence.

There is no existential threat to humanity, as long as human minds are integrated into the global supermind bio-digital fusion, as pictured below:

[Figure: Singularity.jpeg]

Conclusion

Thus, the main purpose is to provide a conceptual framework to define Human-Machine Intelligence and Learning (HMIL) as Global Intelligence. And the first step in creating HMIL is to understand its nature or concept against the main research questions (why, what, who, when, where, how).

We can describe developments in MI as “more profound than fire or electricity”, as the topmost general purpose technology.

All we need is to disrupt the current narrow anthropocentric AI and its branches, ML and DL, as well as DS and SE, with Human-Machine Intelligence and Learning, HMIL, or Global AI:

HMIL = AI + ML + DL + NLU + 6G + Bio-, Nano-, Cognitive engineering + Robotics + SC, QC + the Internet of Everything + MME, BCE + Human Minds = Encyclopedic Intelligence = Real I = Global [Human Mind - Machine] Intelligence = Global Supermind

The Human-Machine Intelligence and Learning (HMIL) systems are to integrate human minds and intelligent machinery in all its sorts and forms: AAI, ML, DL, MMEs, and the IoE.

Human and machine powers are most productively harnessed by designing hybrid human-AI networks in which each party complements the other's strengths and counterbalances the other's weaknesses.
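One common way to operationalize that complementarity is confidence-based routing: the machine decides the cases it is sure about and defers the rest to a person. A minimal sketch follows; the threshold and cases are invented for illustration:

```python
# Minimal sketch of hybrid human-machine routing: the model acts only
# when confident; uncertain cases go to a human reviewer. The threshold
# and cases are invented for illustration.

def route(prediction: str, confidence: float, threshold: float = 0.9) -> str:
    if confidence >= threshold:
        return f"auto: {prediction}"        # machine strength: speed, scale
    return "escalate to human reviewer"     # human strength: judgment

for pred, conf in [("approve", 0.97), ("reject", 0.62), ("approve", 0.91)]:
    print(route(pred, conf))
```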

So, artificial intelligence (AI) is best described to the general public as augmented artificial intelligence (AAI), which best serves the public good.

One should see AI/ML/ANN/DL as augmented intelligence, predictive analytics, automated software or advanced statistics, not as artificial intelligence or machine intelligence, machine learning or deep learning.


Azamat Abdoullaev

Tech Expert

Azamat Abdoullaev is a leading ontologist and theoretical physicist who introduced a universal world model as a standard ontology/semantics for human beings and computing machines. He holds a Ph.D. in mathematics and theoretical physics.