Why Global Artificial Intelligence is the Next Big Thing

Global Artificial Intelligence is emerging as the most disruptive technology of the 21st century.

It is fast growing into a digital general-purpose technology (GPT) that generalizes all the existing and emerging ones: the power engine, electricity, electronics, mechanization, control theory (automation), the automobile, the computer, the Internet, nanotechnology, biotechnology, robotics, and machine learning.

The combined value – to society and industry – of digital transformation across industries could be greater than $100 trillion over the next 10 years. “Combinatorial” effects of digital technologies – mobile, cloud, artificial intelligence, sensors and analytics among others – are accelerating progress exponentially, but the full potential will not be achieved without collaboration between business, policy-makers and NGOs.

The five top-performing tech stocks in the market, Facebook, Amazon, Apple, Microsoft, and Alphabet's Google (FAAMG), represent the U.S.'s Narrow AI technology leaders, whose products span machine learning, deep learning and data analytics cloud platforms, mobile and desktop systems, hosting services, online operations, and software products. The five FAAMG companies had a joint market capitalization of around $4.5 trillion a year ago and now exceed $7.6 trillion, all ranking within the top 10 US companies. Even by Gartner's modest predictions, the total Narrow-AI-derived business value is forecast to reach $3.9 trillion in 2022.

Global AI (GAI) integrates Narrow AI, Machine Learning, Deep Learning, Symbolic AI, General (Strong, or Human-level) AI, and Superhuman AI with advanced data analytics, computing ontologies, cybernetics and robotics. It relies on fundamental scientific knowledge of the world, computer science, mathematics, statistics, data science, psychology, linguistics, semantics, ontology, and philosophy.

The GAI is designed as a Causal Machine Intelligence and Learning Platform, to be deployed as Artificial Intelligence for Everybody and Everything, AI4EE.

The GAI makes the most disruptive general-purpose technology of the 21st century, with an estimated development cost of $1 trillion and a lead time of 5 years, given an effective ecosystem of innovative business, government, policy-makers, NGOs, international organizations, civil society, academia, media and the arts.

The AI4EE Platform could compute the real world as a whole and in parts [e.g., the causal nexus of various human domains, such as fire technology and human civilizations; globalization and political power; climate change and consumption; economic growth and ecological destruction; future economy, unemployment and global pandemic; wealth and corruption; perspectives on the world's future, etc.]. It is also critical in determining the perspectives on a country's future, defining how to organize for the changes ahead in critical areas:

  • Preventing global pandemics, defeating COVID-19
  • Fostering economic competitiveness and social mobility
  • Advancing social, economic and racial equity
  • Building a low-carbon smart economy
  • Creating a sustainable, green, smart, inclusive future

There are regular sustaining technologies, evolutionary and revolutionary ones, as well as highly disruptive technologies, innovations, and inventions. At the top, there are general-purpose technologies, such as electricity or IT, that can affect the whole of society and the entire economy at the national and global levels.

Here are some examples of the most disruptive technologies of the present and near future, with suggested values over the next 5-10 years:

  • Artificial Intelligence, $50 trillion
  • Machine Learning, Deep Learning, $5 trillion
  • Internet of Things (IoT), Smart Phones, $5 trillion
  • Robotics, $5 trillion
  • Mixed Reality, $5 trillion
  • Sustainable Energy, $5 trillion
  • Blockchain Technology, $5 trillion
  • 3-4D Printing, $5 trillion
  • Medical Innovations, $5 trillion
  • High-Speed Travel, $5 trillion
  • Space Exploration and Robotic Colonisation, $5 trillion

“AI” is a construct that has been the subject of increasing attention in technology, media, business, industry, government and civil life in recent years.

Today's AI is the subject of controversy. You might have heard about narrow/weak, general/strong/human-level and super artificial intelligence, or about machine learning, deep learning, reinforcement learning, supervised and unsupervised learning, neural networks, Bayesian networks, NLP, and a whole lot of other confusing terms, all dubbed AI techniques.

Many of the rules- and logic-based systems that were previously considered Artificial Intelligence are no longer AI. In contrast, systems that analyze and find patterns in data are dubbed machine learning, widely promoted as the dominant form of Narrow and Weak AI, which imitates or mimics specific human intelligent behavior or cognitive functions, capacities, capabilities and skills. Today's AI/ML/DL is about the simulation or imitation of human intelligence processes by machines, especially computer systems, with specific applications such as expert systems, natural language processing, speech recognition and machine vision.

In general, Narrow ML/AI systems work by ingesting large amounts of labelled training data, analyzing the data for correlations and patterns, and using these patterns to output predictions about future states. NAI requires large corpora of examples, training its ML models to encode the general distribution of the problem into their parameters on specialized hardware and software, with the algorithms written in a high-level programming language such as Python, R or Java. But in the real world, distributions often shift in ways that cannot be anticipated and controlled in the training data. For instance, a DNN trained on millions of images can easily fail, or be compromised by adversarial cyberattacks, when it sees objects under minor changes in the environment, in new lighting conditions, from slightly different angles or against new backgrounds.
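A minimal sketch of this point, assuming a toy nearest-centroid classifier on synthetic Gaussian data (all numbers are illustrative): the model encodes the training distribution into its parameters, scores well on i.i.d. test data, and degrades sharply when the data distribution shifts.

```python
import random

random.seed(0)

# Two classes drawn from Gaussian blobs: class 0 near (0, 0), class 1 near (4, 4).
def sample(cls, shift=0.0, n=200):
    cx, cy = (0.0, 0.0) if cls == 0 else (4.0, 4.0)
    return [(random.gauss(cx + shift, 1.0), random.gauss(cy + shift, 1.0), cls)
            for _ in range(n)]

train = sample(0) + sample(1)

# "Training" = encoding the data distribution into parameters (here: class centroids).
def centroid(points):
    xs, ys = zip(*[(x, y) for x, y, _ in points])
    return sum(xs) / len(xs), sum(ys) / len(ys)

c0 = centroid([p for p in train if p[2] == 0])
c1 = centroid([p for p in train if p[2] == 1])

def predict(x, y):
    d0 = (x - c0[0]) ** 2 + (y - c0[1]) ** 2
    d1 = (x - c1[0]) ** 2 + (y - c1[1]) ** 2
    return 0 if d0 < d1 else 1

def accuracy(points):
    return sum(predict(x, y) == cls for x, y, cls in points) / len(points)

print(accuracy(sample(0) + sample(1)))                     # i.i.d. test: high
print(accuracy(sample(0, shift=3) + sample(1, shift=3)))   # shifted test: degrades
```

Shifting both blobs by the same small offset, the analogue of new lighting or a new background, leaves the learned centroids stale and halves the accuracy, even though a human would still see two well-separated clusters.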

One of the leading examples of such independent and identically distributed (i.i.d.) data-driven, numerical and statistical Narrow and Weak ML/AI is GPT-3, marked by 175 billion parameters/synapses, while the human brain has 86 billion neurons with at least 1,000 trillion synapses.

The cost to train GPT-3, with its 175 billion parameters, was about US$4.6 million.

Now how much will it cost to train a language model the size of the human brain?

GPT-4 (human-brain scale, 100 trillion parameters), estimated training cost by year:

  • 2020: US$2.6 billion
  • 2024: US$325 million
  • 2028: US$40 million
  • 2032: US$5 million
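A back-of-envelope sketch, assuming (as the figures above imply) that training cost falls roughly 8x every 4 years from a 2020 baseline of US$2.6 billion:

```python
# Hypothetical cost model: training cost for a 100-trillion-parameter model
# starts at ~US$2.6B in 2020 and falls ~8x every 4 years (i.e., halving
# roughly every 16 months), consistent with the estimates above.
BASE_YEAR, BASE_COST = 2020, 2.6e9

def projected_cost(year):
    return BASE_COST / 8 ** ((year - BASE_YEAR) / 4)

for year in (2020, 2024, 2028, 2032):
    print(year, f"${projected_cost(year) / 1e6:,.0f}M")
```

The 2028 figure comes out near US$41 million under this model, close to the quoted US$40 million; the others match the list above.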

One can even reduce the cost of machine training over a long time, given all the data of the world, with its senseless data correlations and patterns. Still, without an effective intelligence, that data will never be transformed into information, not to mention knowledge, understanding, learning or wisdom.

To generalize the narrow anthropomorphic AI approaches, we need to design, develop, deploy and distribute a REALITY-based AI, which generalizes a BRAIN/MIND-based AI. It could be operationalized as a synergetic man-machine digital superintelligence [digital supermind] that models and simulates the world to effectively and sustainably interact with any environment, physical, natural, mental, social, digital or virtual. This is a general definition covering intelligent systems of any complexity: human, machine, or hypothetical alien intelligences.

The heart and soul, or engine, of the AI4EE Platform is a consistent, coherent and comprehensive causal model of the world.

For example, while the ML/NAI/DL cloud platforms of Amazon, Google, or Facebook run deep neural networks, they lack any analogue of the neocortex (neopallium, isocortex, or six-layered cortex), the set of layers of the mammalian cerebral cortex involved in higher-order brain functions such as sensory perception, cognition, conscious thought, motor control and commands, spatial reasoning, and language understanding and generation.

The role of an “electronic digital neocortex” could be played by the AI4EE as a Digital Superintelligence Layer, an idea advanced by Neuralink's CEO, Elon Musk.

Overall, the Global AI4EE model underlies a growing body of R&D on linear, specific, inductive, bottom-up, space-time causality models created in the narrow context of statistical Narrow AI and ML, as sampled below:

  • Judea Pearl and Dana Mackenzie’s The Book of Why. The New Science of Cause and Effect
  • Introduction to Causality in Machine Learning
  • Eight myths about Causality and Structural Equation Models
  • Deep learning could reveal why the world works the way it does
  • To Build Truly Intelligent Machines, Teach Them Cause and Effect
  • Causal Inference in Machine Learning
  • Causal Bayesian Networks: A flexible tool to enable fairer machine learning
  • Causality in machine learning
  • Bayesian Networks and the search for Causality
  • The Case for Causal AI
  • Causal deep learning teaches AI to ask why

Because of the narrow causal AI assumptions, these approaches lack much deeper research into generalization and transfer, among other principal things, as noted in the article Towards Causal Representation Learning:

  • Learning Non-Linear Causal Relations at Scale: (1) understanding under which conditions nonlinear causal relations can be learned; (2) which training frameworks best exploit the scalability of machine learning approaches; and (3) providing compelling evidence of the advantages over (non-causal) statistical representations in terms of generalization, repurposing, and transfer of causal modules on real-world tasks.
  • Learning Causal Variables: causal representation learning, the discovery of high-level causal variables from low-level observations.
  • Understanding the Biases of Existing Deep Learning Approaches.
  • Learning Causally Correct Models of the World and the Agent: building a causal description of both a model of the agent and the environment (world models) for robust and versatile model-based reinforcement learning.

The Causal Machine Intelligence and Learning involves the deep learning causal cycle of World [Environments, Domains, Situations], Data [Perception, Percepts, Sensors], Information [Cognition, Memory], Knowledge [Learning, Thinking, Reasoning], Wisdom [Learning, Understanding, Decision], Interaction [Action, Behavior, Actuation, Adaptation, Change], and World…

In essence, the Global AI modelling should consist of the following:

Basic Assumptions: prior knowledge, the basis of our knowing, understanding, or thinking about a world or a problem (primary causes, principles and elements).

Global Model: the consistent, comprehensive and coherent representation of the world, its general views and key assumptions, in a form that supports automatic reasoning (i.e., conceptual, ontological/causal, logical, scientific, or mathematical/statistical models, such as an equation, a simulation, or a neural network model of pictures and words).

Global Data: what we measure, calculate, observe or learn about the real world (facts and statistics; variables and values; systematized as world’s data, information and knowledge).

Causal Master Algorithms (integrating rule-based and statistical learning algorithms, models, programs and techniques, such as machine learning, deep learning or deep neural networks)

General AI Software/Hardware (AI Chips, AI Supercomputers, Cloud AI, Edge AI, AI Internet of Everything [Things, People, Processes and Services])

Universal Artificial Intelligence, and How Much It Might Cost a Real AI Model

The Ladder of Reality, Causality and Mentality, Science and Technology, Human Intelligence and Non-Human Intelligence (AI)

Still, causation must be distinguished from mere dependence, association, correlation, or statistical relationship, the link between two variables, as in the AA ladder of CausalWorld:

  • Chance, statistical associations, causation as a statistical correlation between cause and effect, correlations (random processes, variables, stochastic errors), random data patterns, observations, Hume's observation of regularities, Karl Pearson's "causes have no place in science", Russell's "the law of causality" as a "relic of a bygone age" / Observational Big Data Science / Statistical Physics / Statistical AI/ML/DL ["The End of Theory: The Data Deluge Makes the Scientific Method Obsolete"]
  • Common-effect relationships, bias (systematic error, as sharing a common effect, collider)/Statistics, Empirical Sciences
  • Common-cause relationships, confounding (a common cause, confounder)/Statistics, Empirical Sciences
  • Causal links, chains, causal nexus of causes and effects (material, formal, efficient and final causes; probabilistic causality, P(E|C) > P(E|not-C), doing and interventions, counterfactual conditionals, "if the first object had not been, the second had never existed"; linear, chain, probabilistic or regression causality)/Experimental Science/Causal AI
  • Reverse, reactive, reaction, reflexive, retroactive, reactionary, responsive, retrospective or inverse causality, as reversed or returned action, contrary action or reversed effects due to a stimulus, reflex action, inverse probabilistic causality, P(C|E) > P(C|not-E), as in social, biological, chemical, physiological, psychological and physical processes/Experimental Science
  • Interaction, real causality, interactive causation, self-caused cause, causa sui, causal interactions: true, reciprocal, circular, reinforcing, cyclical, cybernetic, feedback, nonlinear deep-level causality, universal causal networks, as embedded in social, biological, chemical, physiological, psychological and physical processes/Real Science/Real AI/Real World/the level of deep philosophy, scientific discovery, and technological innovation
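The prima facie probabilistic-causality condition from the ladder above, P(E|C) > P(E|not-C), can be checked directly from a 2x2 contingency table; the counts below are hypothetical:

```python
# Probabilistic-causality check from a 2x2 contingency table: does C raise
# the probability of E, i.e., is P(E|C) > P(E|not-C)?  (Hypothetical counts.)
counts = {("C", "E"): 40, ("C", "~E"): 10,
          ("~C", "E"): 20, ("~C", "~E"): 30}

def p_e_given(c):
    e, not_e = counts[(c, "E")], counts[(c, "~E")]
    return e / (e + not_e)

p_c, p_not_c = p_e_given("C"), p_e_given("~C")
print(p_c, p_not_c, p_c > p_not_c)  # here C is a prima facie probabilistic cause
```

As the ladder's higher rungs stress, passing this test establishes only association-level candidacy: a common cause (confounder) or a common effect (collider) can produce the same inequality without any causal link from C to E.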

The Six Layer Causal Hierarchy defines the Ladder of Reality, Causality and Mentality, Science and Technology, Human Intelligence and Non-Human Intelligence (MI or AI).

The CausalWorld [levels of causation] is a basis for all real world constructs, as power, force and interactions, agents and substances, states and conditions and situations, events, actions and changes, processes and relations; causality and causation, causal models, causal systems, causal processes, causal mechanisms, causal patterns, causal data or information, causal codes, programs, algorithms, causal analysis, causal reasoning, causal inference, or causal graphs (path diagrams, causal Bayesian networks or DAGs). 

It fully reviews causal graph analysis, which is of critical importance in data science and data-generating processes, medical and social research, public policy evaluation, statistics, econometrics, epidemiology, genetics and related disciplines.

The CausalWorld model covers Pearl's statistical linear causal metamodel, the ladder of causation: Association (seeing/observing) entails the sensing of regularities or patterns in the input data, expressed as correlations; Intervention (doing) predicts the effects of deliberate actions, expressed as causal relationships; Counterfactuals involve constructing a theory of (part of) the world that explains why specific actions have specific effects and what happens in the absence of such actions.
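The gap between the first two rungs, seeing versus doing, can be illustrated with a toy simulation (all probabilities hypothetical) in which a confounder Z drives both C and E, so the observed association P(E|C) overstates the interventional effect P(E|do(C)):

```python
import random

random.seed(1)

# Rung 1 vs rung 2: Z -> C and Z -> E, with no direct C -> E effect at all.
N = 100_000

def draw(do_c=None):
    z = random.random() < 0.5                               # confounder
    c = do_c if do_c is not None else (random.random() < (0.8 if z else 0.2))
    e = random.random() < (0.9 if z else 0.3)               # E depends only on Z
    return c, e

obs = [draw() for _ in range(N)]
p_e_given_c = sum(e for c, e in obs if c) / sum(c for c, _ in obs)   # seeing
p_e_do_c = sum(e for _, e in (draw(do_c=True) for _ in range(N))) / N  # doing

print(round(p_e_given_c, 2), round(p_e_do_c, 2))
```

Observationally, conditioning on C selects mostly Z-true worlds, so P(E|C) comes out near 0.78; forcing C by intervention leaves Z at its natural 50/50 split, so P(E|do(C)) is about 0.6, exposing the purely spurious part of the correlation.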

It must be noted that any causal inference statistics or AI models relying on "the ladder of causality" [The Book of Why: The New Science of Cause and Effect] are still fundamentally defective for missing the key levels of real nonlinear causality of the Six Layer Causal Hierarchy.

It subsumes causal models (or structural causal models): conceptual models describing the causal mechanisms of causal networks, systems or processes, formalized as mathematical models representing causal relationships, an ordered triple {C, E, M}, where C is a set of [exogenous] causal variables whose values are determined by factors outside the model; E is a set of [endogenous] causal variables whose values are determined by factors within the model; and M is a set of structural equations that express the value of each endogenous variable as a function of the values of the other variables in C and E.
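A minimal sketch of the ordered triple {C, E, M}, with toy structural equations and a do-style intervention (the variables and equations are purely illustrative):

```python
# A minimal structural causal model as the ordered triple {C, E, M}:
# exogenous variables C, endogenous variables E (the keys of M), and
# structural equations M, one per endogenous variable.
C = {"u": 2.0}                          # exogenous: set outside the model

M = {                                   # structural equations (evaluation order matters)
    "x": lambda v: v["u"],              # x := u
    "y": lambda v: 3 * v["x"] + 1,      # y := 3x + 1
}

def solve(exo, eqs, do=None):
    """Evaluate the endogenous variables in order; `do` overrides an equation
    (Pearl's do-operator: surgically replace a structural equation)."""
    v = dict(exo)
    for var, f in eqs.items():
        v[var] = do[var] if do and var in do else f(v)
    return v

print(solve(C, M))                   # observational: x = 2, y = 7
print(solve(C, M, do={"x": 10.0}))   # interventional: do(x = 10) makes y = 31
```

The intervention replaces the equation for x rather than conditioning on its value, which is exactly what distinguishes the causal model from a joint probability distribution over the same variables.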

The causal model allows intervention studies, such as randomized controlled trials, RCT (e.g. a clinical trial) to reduce biases when testing the effectiveness of the treatment-outcome process.

Besides, it covers such special things as causal situational awareness (CSA), understanding, sense-making or assessment, "the perception of environmental elements and events with respect to time or space, the comprehension of their meaning, and the projection of their future status".

The CSA is a critical foundation for causal decision-making across a broad range of situations, as the protection of human life and property, law enforcement, aviation, air traffic control, ship navigation, health care, emergency response, military command and control operations, self defense, offshore oil, nuclear power plant management, urban development, and other real-world situations.

In all, the AA ladder of CausalWorld with interactive causation and reversible causality covers all products of the human mind, including science, mathematics, engineering, technology, philosophy, and art:

  • Science, Scientific method, Scientific modelling: Causal Science, Real Science
  • Mathematics: Real, Causal Mathematics
  • Probability theory and Statistics: Causal Statistics, Real Statistics, Cstatistics
  • Cybernetics 2.0 is engaged in the study of the circular causal and feedback mechanisms in any systems, natural, social or informational
  • Computer science, AI, ML and DL, as Causal/Explainable AI, Real AI (RAI)

The Real AI Fundamentals: The Causal World of Science and Technology

"The law of causation, according to which later events can theoretically be predicted by means of earlier events, has often been held to be a priori, a necessity of thought, a category without which science would not be possible." (Russell, External World p.179).

[Real] Causality is a key notion in the separation of science from non-science and pseudo-science, "consisting of statements, beliefs, or practices that claim to be both scientific and factual but are incompatible with the causal method". Replacing falsifiability or verifiability, the causalism principle or the causal criterion of meaning maintains that only statements that are causally verifiable are cognitively meaningful, or else they are items of non-science.

The demarcation between science and pseudoscience has philosophical, political, economic, scientific and technological implications. Distinguishing real facts and causal theories from modern pseudoscientific beliefs, as it is/was with astrology, alchemy, alternative medicine, occult beliefs, and creation science, is part of real science education and literacy.

REAL SCIENCE DEALS ESSENTIALLY WITH THE CAUSE-EFFECT-CAUSE INTERRELATIONSHIPS, CAUSAL INTERACTIONS OF WORLD'S PHENOMENA.

Science studies causal regularities in the relations and order of phenomena in the world.

The extant definitions, leaving out real causation, are no longer valid; they are outdated and liable to falsification as pseudoscience:

"Science (from the Latin word scientia, meaning "knowledge") is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe".

"Science, any system of knowledge that is concerned with the physical world and its phenomena and that entails unbiased observations and systematic experimentation. In general, a science involves a pursuit of knowledge covering general truths or the operations of fundamental laws".

"Science is the intellectual and practical activity encompassing the systematic study of the structure and behaviour of the physical and natural world through observation and experiment".

[Real] Science is a systematic enterprise that builds and organizes the world's data, information or knowledge in the form of causal laws, explanations and predictions about the world and its phenomena.

Real science is divided into the natural sciences (e.g., biology, chemistry, and physics), which study causal interactions and patterns in nature; the cognitive sciences (linguistics, neuroscience, artificial intelligence, philosophy, anthropology, and psychology); the social sciences (e.g., economics, politics, and sociology), which study causal interactions in individuals and societies; technological sciences (applied sciences, engineering and medicine); the formal sciences (e.g., logic, mathematics, and theoretical computer science). Scientific research involves using the causal scientific method, which seeks to causally explain the events of nature in a reproducible way.

Observation and Experimentation, Simulation and Modelling, all are important in Real Science to help establish dynamic causal relationships (to avoid the correlation fallacy).

Formally, the human-AI world is modeled as a global cyclical symmetric graph/network of virtually unlimited nodes/units (entity variables, data, particles, atoms, neurons, minds, organizations, etc.) and virtually unlimited interrelationships (real data patterns) among them.

It could be approximated with Undirected Graphical Models (UGM), Markov and Bayesian Network Models, Computer Networks, Social Networks, Biological Neural Networks, or Artificial neural networks (ANNs).

Bayesian networks (Bayes networks, belief networks, or decision networks), influence diagrams (relevance diagrams, decision diagrams or decision networks), neural networks (natural and artificial), path analysis (statistics), structural equation modeling (SEM), mathematical models, computer algorithms, statistical methods, regression analysis, causal inference, causal networks, knowledge graphs, semantic networks, and scientific knowledge, methods, modeling and technology, all could and should represent causal relationships, their networks, mechanisms, laws and rules.
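As a minimal illustration of the first of these, a tiny causal Bayesian network (Rain -> Sprinkler, and Rain and Sprinkler -> WetGrass) queried by brute-force enumeration; the conditional probability tables are illustrative, not from any real dataset:

```python
# A toy causal Bayesian network with inference by enumeration over the joint.
P_RAIN = 0.2
P_SPRINKLER = {True: 0.01, False: 0.4}              # P(Sprinkler | Rain)
P_WET = {(True, True): 0.99, (True, False): 0.9,    # P(Wet | Sprinkler, Rain)
         (False, True): 0.9, (False, False): 0.0}

def joint(r, s, w):
    """P(Rain=r, Sprinkler=s, Wet=w), factored along the network's edges."""
    p = P_RAIN if r else 1 - P_RAIN
    p *= P_SPRINKLER[r] if s else 1 - P_SPRINKLER[r]
    p *= P_WET[(s, r)] if w else 1 - P_WET[(s, r)]
    return p

# Query: P(Rain | WetGrass), summing out the hidden Sprinkler variable.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print(round(num / den, 3))
```

Enumeration is exponential in the number of variables, which is why real systems use algorithms such as variable elimination or belief propagation over the same factored representation.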

[Figure: neural network topologies (Neural_Networks.png)]

As an example, the NN topologies compiled by Fjodor van Veen of the Asimov Institute acquire real substance only as causal NNs, as below:

The Causal Structure Learning and Causal Inference of the computational networks are the subject of the Real AI/ML/DL platform.

True and Real AI embraces a Causal AI/ML/DL, operating with causal information, instead of statistical data patterns, performing causal inferences about causal relationships from [statistical] data, in the first place.

Current state-of-the-art correlation-based machine learning and deep learning have severe limitations in real dynamic environments failing to unlock the true potential of AI for humanity.

Real AI is a new category of intelligent machines that understand reality, objective, subjective and mixed, and its complex cause-and-effect relationships, a critical step towards true, real AI vs. imitating, non-real AI.

Real AI vs Non Real AI

There is a deep misunderstanding of AI, of its real nature and true definition, much promoted by the mass media and popular culture: strong or general, autonomous and conscious human-like AI (HAI) opposed to humans.

Following two periods of development (1940-1960 and 1980-1990) interspersed with several cold “winters”, HAI received a new bloom in the 2010s thanks to statistical learning algorithms, massive volumes of data and the discovery of the very high efficiency of GPUs in accelerating learning algorithms.

Converging sciences, methods, theories and techniques (including philosophy, semantics, natural language, science, mathematics, logic, statistics, probability, computational neurobiology, computer science and information engineering), a real and true AI is downgraded to the HAI status, aiming “to achieve the imitation by a machine of the cognitive abilities of a human being”.

Below are some samples of mainstream human-like, anthropomorphic AI definitions from IBM, SAS, Britannica, Investopedia, and the EC, with their contradictory divisions of Narrow and Weak, Strong and General, or Superhuman AI, all compared with the pioneering definition.

AI leverages computers and machines to mimic the problem-solving and decision-making capabilities of the human mind (IBM).

AI makes it possible for machines to learn from experience, adjust to new inputs and perform human-like tasks (SAS).

The term AI is frequently applied to the project of developing systems endowed with the intellectual processes characteristic of humans, such as the ability to reason, discover meaning, generalize, or learn from past experience (Britannica).

AI refers to the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions (Investopedia).

Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g., voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).” (EC)

John McCarthy offered the most open definition in the 2004 paper, WHAT IS ARTIFICIAL INTELLIGENCE?

" It is the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable."

"While biological intelligent systems (including humans) have developed sophisticated abilities through interaction, evolution and learning in order to act successfully in our world, our understanding of these phenomena is still limited. The synthesis of intelligent, autonomous, learning systems remains a major scientific challenge". 

AI is not all about simulating human intelligence or biological intelligent behavior. A solid definition of intelligence doesn’t depend on relating it to human intelligence only.

“Most work in AI involves studying the problems the world presents to intelligence rather than studying people or animals”. The aim is to create powerful intelligent systems operating autonomously in, and adapting to, any complex, changing environment, physical, digital or virtual.

So, classifying AI as narrow, general or superhuman, with sentience and consciousness, comes from its fuzzy, confusing and misleading definition.

Today's AI is a BRAIN-minded, anthropomorphic, futile and dangerous construct, in any of its forms, sorts and varieties:

  • Narrow, general or super AI;
  • Embedded or trustworthy AI;
  • Cloud AI or edge AI;
  • Cognitive computing or AI chips;
  • Machine learning, deep learning or artificial neural networks;
  • AI platforms, tools, services, applications;
  • AI industry, FBSI, Healthcare, Telecommunications, Transport, Education, Government.

They are all somehow mindlessly engaged in copying parts of human intelligence, cognition or behavior, while showing zero mind, intellect or understanding.

Most of the History of AI Research looks like a History of Fake AAI Research.

Of all the programmable functions of AI systems (perception, cognition, planning, learning, reasoning, problem solving, understanding, intellect, volition, and decision making), Narrow/Weak AI operates under a narrow set of constraints and limitations. Narrow AI does not mimic or replicate human intelligence as a whole; it mimics, fakes or simulates only specific human behaviour based on a narrow range of parameters and contexts.

It performs only singular tasks, be it facial recognition, speech recognition and voice assistance, driving a car, or searching the internet.

Anthropomorphic AI (AAI), the attribution of human-like feelings, mental states, and behavioral characteristics to computing machinery, as has long been done with inanimate objects, animals, natural phenomena and supernatural entities, is the most conceptually, morally and legally risky part of the whole enterprise.

For all AAI applications, this implies building AI systems on embedded ethical principles, trust and transparency. The result is all sorts of frameworks, such as the EC's Framework of ethical aspects of artificial intelligence, robotics and related technologies, and the EU guidelines on ethics in artificial intelligence: Context and implementation.

Other well-known risks of non-real AAI include:

  • AI, Robotics and Automation-spurred massive job loss.
  • Privacy violations, a handful of big tech AI companies control billions of minds every day manipulating people’s attention, opinions, emotions, decisions, and behaviors with personalized information.
  • 'Deep Fakes'
  • Algorithmic bias caused by bad data.
  • Socioeconomic inequality.
  • Weapons automation, AI-powered weaponry, a global AI arms race.
  • Malicious use of AI, threatening digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns), and social security (misuse of facial recognition technology in offices, schools and other venues).
  • Human brains hacking and replacement.
  • Destructive superintelligence — aka artificial general human-like intelligence

“Businesses investing in the current form of machine learning (ML), e.g. AutoML, have just been paying to automate a process that fits curves to data without an understanding of the real world. They are effectively driving forward by looking in the rear-view mirror,” says causaLens CEO Darko Matovski.

Examples of narrow and weak AI:

  • Rankbrain by Google / Google Search.
  • Siri by Apple, Alexa by Amazon, Cortana by Microsoft and other virtual assistants.
  • IBM's Watson.
  • Image / facial recognition software.
  • Disease mapping and prediction tools.
  • Manufacturing and drone robots.
  • Email spam filters / social media monitoring tools for dangerous content
  • Entertainment or marketing content recommendations based on watch/listen/purchase behaviour
  • Self-driving cars, etc.

A true goal of Machine Intelligence and Learning is not to equal or exceed human intelligence, but to become the last and the best of “general purpose technologies” (GPTs).

GPTs are technologies that can affect an entire economy at a global level, revolutionizing societies through their impact on pre-existing economic and social structures.

GPTs' examples: the steam engine, railroad, interchangeable parts and mass production, electricity, electronics, material handling, mechanization, nuclear energy control theory (automation), the automobile, the computer, the Internet, medicine, space industries, robotics, software automation and artificial intelligence.

The four most important GPTs of the last two centuries were the steam engine, electric power, information technology (IT), and general artificial intelligence (gAI).

And the time between invention and implementation has been shrinking, cutting in half with each GPT wave. The time between invention and widespread use was about 80 years for the steam engine, 40 years for electricity, and about 20 years for IT.

Source: Comin and Mestieri (2017).

Now the implementation lag for the MIL-GPT technologies will be about 5 years.

“My assessment about why A.I. is overlooked by very smart people is that very smart people do not think a computer can ever be as smart as they are. And this is hubris and obviously false. Working with A.I. at Tesla lets me say with confidence that we're headed toward a situation where A.I. is vastly smarter than humans and I think that time frame is less than five years from now. But that doesn't mean that everything goes to hell in five years. It just means that things get unstable or weird.” (Elon Musk)

All of this is fast converging into a real and true, or general, AI: globally intelligent systems integrating all sorts of automated ML/DL/ANN models, algorithms, techniques and cloud platforms.

General AI with Automation and Robotics emerges as the genuine General Purpose Technology subsuming all the previous ones: the steam engine, railroad, electricity, electronics, mechanization, control theory (automation), the automobile, the computer, the Internet, nanotechnology, biotechnology, and machine learning.

HOW TO CREATE AN ARTIFICIAL INTELLIGENCE GENERAL TECHNOLOGY PLATFORM

Human and machine powers are most productively harnessed by designing hybrid human-machine superintelligence (HMSI) cyber-physical networks, in which each party complements the other’s strengths and counterbalances the other’s weaknesses.

HMSI [RealAI] is all about 5 interrelated universes, as the key factors of global intelligent cyberspace:

  • reality/world/universe/nature/environment/spacetime, as the totality of all entities and relationships;
  • intelligence/intellect/mind/reasoning/understanding, as human minds and AI/ML models;
  • data/information/knowledge universe, as the world wide web; data points, data sets, big data, global data, world’s data; digital data, data types, structures, patterns and relationships; information space, information entities, common and scientific and technological knowledge;
  • software universe, as the web applications, application software and system software, source or machine codes, as AI/ML codes, programs, languages, libraries;
  • hardware universe, as the Internet, the IoT, CPUs, GPUs, AI/ML chips, digital platforms, supercomputers, quantum computers, cyber-physical networks, intelligent machinery and humans

The key question is how it is all represented, mapped, coded and processed in cyberspace/digital reality by computing machinery of any complexity, from smartphones to the internet of everything and beyond.

AI is the science and engineering of reality-mentality-virtuality [continuum] cyberspace, its nature, intelligent information entities, models, theories, algorithms, codes, architectures and applications.

Its aim is to develop the AI Cyberspace of physical, mental and digital worlds: the totality of environments, physical, mental, digital or virtual, and application domains.

AI, as a symbiotic hybrid human-machine superintelligence, is to supersede the extant statistical narrow AI with its branches, such as machine learning, deep learning, machine vision, NLP, cognitive computing, etc.

FROM UNICODE TO UNIDATACODE: HOW TO INTEGRATE AI INTO ICT

Real AI of deep causal learning and understanding has the potential to enhance the accessibility of information & communications technology (ICT). 

It may be system software: operating systems like Apple's iOS, Google's Android, Microsoft's Windows Phone, BlackBerry's BlackBerry 10, Samsung's/Linux Foundation's Tizen and Jolla's Sailfish OS; macOS and GNU/Linux; computational science software, game engines, industrial automation, and software-as-a-service applications.

It may be web browsers such as Internet Explorer, Chrome and Firefox; mobile operating systems such as Chrome OS and Firefox OS for smartphones, tablet computers and smart TVs; cloud-based software; or specialized classes of operating systems, such as embedded and real-time systems.

A Hierarchy of Abstraction Levels: A Layer on a Layer

Unicode_URI.png

Here is a heuristic rule: each complex problem in science and technology is solved by adding a new abstraction level.

ICT with computer science, as computing, data storage and digital communication, deep neural networks, etc., is no exception. It relates to the Internet stack, from computers to content: the OSI model and the Internet protocol suite, with the Domain Name System.

Internet Protocol Suite

  • Application layer
  • Transport layer
  • Internet layer
  • Link layer

OSI Model

1.  Physical layer

2.  Data link layer

3.  Network layer

4.  Transport layer

5.  Session layer

6.  Presentation layer

7.  Application layer
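The layering idea, each layer wrapping the payload handed down from the layer above in its own header, can be illustrated with a minimal sketch (the bracketed header strings are invented for illustration, not real protocol formats):

```python
# Minimal illustration of protocol layering: each lower layer wraps
# the data handed down from the layer above in its own header.
# Header formats here are invented for illustration, not real protocols.

def encapsulate(payload: bytes, layers: list[str]) -> bytes:
    """Wrap payload in one hypothetical header per layer, top to bottom."""
    for layer in layers:
        payload = f"[{layer}]".encode() + payload
    return payload

# Internet protocol suite: application-layer data travelling down the stack.
stack = ["application", "transport", "internet", "link"]
frame = encapsulate(b"GET /index.html", stack)
print(frame)
# b'[link][internet][transport][application]GET /index.html'
```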

This is the layering at which the Semantic Web, an extension of the World Wide Web, was attempted by the World Wide Web Consortium (W3C), without lasting success, to make Internet data machine-readable.

The Semantic Web Stack, also known as the Semantic Web Cake or Semantic Web Layer Cake, illustrates the architecture of the Semantic Web as a hierarchy of languages, where each layer exploits and uses the capabilities of the layers below. The Semantic Web aims at converting the current web, dominated by unstructured and semi-structured documents, into a "web of data".

To enable the encoding of semantics with the data, web data representation/formatting technologies such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL) are used.

W3C tried but failed to innovate a sort of universal dataset code layer over the universal character set code layer, or UNICODE.

Still, the whole idea of applying ontology [describing concepts, relationships between entities, and categories of things] was intuitively right.
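The ontology intuition can be made concrete: RDF represents knowledge as subject-predicate-object triples. Here is a minimal in-memory triple store, sketched in plain Python rather than a real RDF library such as rdflib; the facts stored are toy examples:

```python
# RDF models data as subject-predicate-object triples.
# Minimal in-memory triple store; the stored facts are toy examples.
triples = set()

def add(subject: str, predicate: str, obj: str) -> None:
    triples.add((subject, predicate, obj))

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the given pattern (None = wildcard)."""
    return [
        t for t in triples
        if (subject is None or t[0] == subject)
        and (predicate is None or t[1] == predicate)
        and (obj is None or t[2] == obj)
    ]

add("AlphaZero", "is_a", "ReinforcementLearningSystem")
add("AlphaZero", "developed_by", "DeepMind")
print(query("AlphaZero", None, None))
```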

Unicode and Unidatacode

Unicode is an information technology (IT) standard for the consistent encoding, representation, and handling of text expressed in most of the world's writing systems. The standard is maintained by the Unicode Consortium, and as of March 2020 there are a total of 143,859 characters in Unicode 13.0 (143,696 graphic characters and 163 format characters), covering 154 modern and historic scripts, as well as multiple symbol sets and emoji. The character repertoire of the Unicode Standard is synchronized with ISO/IEC 10646, and both are code-for-code identical.

The Universal Coded Character Set (UCS) is a standard set of characters defined by the International Standard ISO/IEC 10646, Information technology — Universal Coded Character Set (UCS) (plus amendments to that standard), which is the basis of many character encodings, improving as characters from previously unrepresented writing systems are added.
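Unicode's consistent encoding can be observed directly from any modern language runtime; a brief Python illustration:

```python
# Every Unicode character has a unique code point, independent of how
# it is later encoded as bytes (UTF-8, UTF-16, ...).
for ch in ["A", "é", "中", "🙂"]:
    print(ch, hex(ord(ch)), ch.encode("utf-8"))

# A code point round-trips losslessly through any Unicode encoding:
assert "中".encode("utf-8").decode("utf-8") == "中"
assert ord("A") == 0x41  # code points below 128 coincide with ASCII
```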

To integrate AI into computers and system software means creating a Unicode-like abstraction level: the Universal Coded Data Set (UCDS), as AI Unidatacode or EIS UCDS.

The Universal Coded Data Set (UCDS): AI Unidatacode or EIS UCDS

The UCDS is a universal set of data entities, which is the basis of all intelligent/meaningful data encodings, as machine data types, improving as data units, sets, types and points are added.

Data universe represents the state of the world, environment or domain.

Data refers to the fact that some dynamic state of the world is represented or coded in some form suitable for usage or processing.

Data are a set of values of entity variables, such as places, persons or objects.

A data type is formally defined as a class of data with its representation and a set of operators manipulating these representations, or a set of values which a variable can possess and a set of functions that one can apply to these values.

Common data types include:

  • Integer
  • Floating-point number
  • Character/Unicode
  • String
  • Boolean

Almost all programming languages explicitly include the notion of data type, while using different terminology. The data type defines the syntactic operations that can be done on the data, the semantic meaning of the data, and the way values of that type can be stored, providing a set of values from which an expression (i.e. variable, function, etc.) may take its values.
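The common data types listed above map directly onto Python's built-ins, and the type determines which operations are valid:

```python
# Each common data type and its Python built-in.
samples = [
    (42, int),            # integer
    (3.14, float),        # floating-point number
    ("\u00e9", str),      # character/Unicode (Python has no separate char type)
    ("hello", str),       # string
    (True, bool),         # boolean
]
for value, expected_type in samples:
    assert type(value) is expected_type

# The type defines which operations are valid and what they mean:
assert 2 + 3 == 5             # arithmetic on integers
assert "ab" + "cd" == "abcd"  # the same operator means concatenation on strings
```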

Deep Intelligence and Learning: AI + Unidatacode + Machine DL

Deep Intelligence and Learning = AI [real-world competency and common sense knowledge and reasoning, domain models, causes, principles, rules and laws, data universe models and types] + UniDataCode [Unicode] + Machine DL [a hierarchical level of artificial neural networks, neurons, synapses, weights, biases, and functions, feature/representation learning, training data sets, unstructured and unlabeled, algorithms and models, model-driven reinforcement learning]

Given the rules of chess, AlphaZero learned to play board games at superhuman level in about 4 hours; MuZero went further, building a model from first principles without being given the rules at all.

For a dynamic and unpredictable world, you need the AI-Environment interaction model of real intelligence, where AI acts upon the environment (virtual or physical; digital or natural) to change it. AI perceives these reactions to choose a rational course of action.

Every deep AI system must not only have some goals, specific or general-purpose, but also interact efficiently with the world, to count as Deep Intelligence and Learning.
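The AI-environment interaction model described above can be sketched as a minimal perceive-decide-act loop; the guessing-game environment below is a toy assumption for illustration, not from the source:

```python
# Toy sketch of the AI-environment interaction loop: the agent acts on
# the environment, perceives the reaction, and adjusts its next action.
# Environment: a hidden target number; action: a guess; percept: a direction.

class Environment:
    def __init__(self, target: int):
        self.target = target

    def react(self, action: int) -> str:
        if action < self.target:
            return "higher"
        if action > self.target:
            return "lower"
        return "done"

def agent_loop(env: Environment, low: int = 0, high: int = 100) -> int:
    """Binary-search agent: choose an action, perceive, update, repeat."""
    while True:
        action = (low + high) // 2   # decide a rational course of action
        percept = env.react(action)  # act on the environment, perceive the reaction
        if percept == "done":
            return action
        if percept == "higher":
            low = action + 1         # adjust the internal model
        else:
            high = action - 1

print(agent_loop(Environment(37)))  # converges to 37 in a handful of steps
```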

What is Wrong with Today's AI Hardware, Its Chips and Platforms?

All the confusion comes from anthropomorphic Artificial Intelligence, AAI, the simulation of the human brain using artificial neural networks, as if they substitute for the biological neural networks in our brains. A neural network is made up of a bunch of neural nodes (functional units) which work together and can be called upon to execute a model.

Thus, the main purpose in 2021 is to provide a conceptual framework to define Machine Intelligence and Learning. And the first step to create MI is to understand its nature or concept against main research questions (why, what, who, when, where, how).

So, today's AI should be described to people as AAI, augmented intelligence, or advanced statistics, not artificial intelligence or machine intelligence.

Now, what are the levels of AAI applications, tools, and platforms?

Let’s focus only on "AAI chips", forming the brain of an AAI System, replacing CPUs and GPUs, and where most progress has to be achieved.

While GPUs are typically better than CPUs for AI processing, they often fall short, being specialized for computer graphics and image processing rather than for neural networks.

The AAI industry needs specialised processors to enable efficient processing of AAI applications, modelling and inference. As a result, chip designers are now working to create specialized processing units.

These come under many names, such as NPU, TPU, DPU, SPU etc., but a catchall term can be the AAI processing unit (AAI PU), forming the brain of an AAI System on a chip (SoC).

The AAI PU is complemented by: 1. the neural processing unit, or matrix multiplication engine, where the core operations of an AAI SoC are carried out; 2. controller processors, based on RISC-V, ARM, or custom-logic instruction set architectures (ISAs), which control and communicate with all the other blocks and the external processor; 3. SRAM; 4. I/O; 5. the interconnect fabric between the processors (AAI PU, controllers) and all the other modules on the SoC.

The AAI PU was created to execute ML algorithms, typically by operating on predictive models such as artificial neural networks. AAI PUs are usually classified as either training or inference chips, since the two workloads are generally performed independently.
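The reason such chips center on a matrix multiplication engine is that a dense neural-network layer is essentially one matrix-vector multiply plus a nonlinearity. A pure-Python sketch of that core operation, with toy weight values:

```python
# Why AI chips center on a matrix multiplication engine: a dense
# neural-network layer is one matrix-vector multiply plus a nonlinearity.
# Pure-Python sketch; real chips do this in massively parallel hardware.

def matvec(weights, x):
    """Multiply a weight matrix (list of rows) by an input vector."""
    return [sum(w * xi for w, xi in zip(row, x)) for row in weights]

def relu(v):
    """Elementwise nonlinearity: max(0, a)."""
    return [max(0.0, a) for a in v]

def dense_layer(weights, bias, x):
    """One layer: y = relu(W @ x + b)."""
    return relu([a + b for a, b in zip(matvec(weights, x), bias)])

W = [[0.5, -1.0], [2.0, 0.0]]  # 2x2 weight matrix (toy values)
b = [0.0, -1.0]
print(dense_layer(W, b, [1.0, 1.0]))  # [0.0, 1.0]
```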

AAI PUs are generally required for the following:

  • Accelerate the computation of ML tasks manyfold (reportedly up to ~10,000 times) compared to GPUs
  • Consume low power and improve resource utilization for ML tasks as compared to GPUs and CPUs

Unlike CPUs and GPUs, the design of single-purpose AAI SoCs is far from mature.

Specialized AI chips deal with specialized ANNs, and are designed to do two things with them: task-designed training and inference, only for facial recognition, gesture recognition, natural language processing, image searching, spam filtering, etc.

In all, there are {Cloud, Edge, Inference, Training} chips for AAI models of specific tasks. Examples of (Cloud + Training) chips include NVIDIA’s DGX-2 system, which totals 2 petaFLOPS of processing power, made up of 16 NVIDIA V100 Tensor Core GPUs, and Intel Habana’s Gaudi chip.

(Cloud + Inference) chips serve already-trained models: they process the photos you upload to Facebook or the text you feed to Google Translate using the models these companies created. Other examples include AAI chatbots and most AAI-powered services run by large technology companies. Sample chips here include Qualcomm’s Cloud AI 100, a large chip used for AAI in massive cloud data centres, Alibaba’s Hanguang 800, and Graphcore’s Colossus MK2 GC200 IPU.

(Edge + Inference) on-device chip examples include Kneron’s KL520 and recently launched KL720, which are lower-power, cost-efficient chips designed for on-device use, along with Intel’s Movidius VPUs and Google’s Coral Edge TPU.

All of these different types of chips, training or inference, with their different implementations, models, and use cases, are expected to shape the coming AAI of Things (AAIoT).

How to Make a True Artificial Intelligence Platform

In order to create platform-neutral software operating on the world’s data/information/content, able to run and display properly on any type of computer, cell phone, device or technology platform, the following are required:

  • Operating Systems.
  • Computing/Hardware/Cloud Platforms.
  • Database Platforms.
  • Storage Platforms.
  • Application Platforms.
  • Mobile Platforms.
  • Web Platforms.
  • Content Management Systems.

The AI programming language should act as both a general programming language and a computing platform. Its applications could be launched on any operating system and hardware: from mobile operating systems, such as Android or embedded Linux, to hardware platforms ranging from game consoles to supercomputers and quantum machines.

Cyberspace as a New Intelligent Reality

According to Britannica, cyberspace is the “virtual” world created by links between computers, Internet-enabled devices, servers, routers, and other components of the Internet’s infrastructure. The term cyberspace was first used by William Gibson in his 1982 story “Burning Chrome” and popularized in his 1984 novel Neuromancer, which depicted a computer network in a world filled with artificially intelligent beings.

The infrastructure of cyberspace is now fundamental to the functioning of national and international security systems, trade networks, emergency services, basic communications, and other public and private activities. 

In all, there are many definitions, as listed below:

  • Cyberspace as a digital reality
  • Cyberspace as an immersive virtual reality
  • Cyberspace as the flow of digital data through the network of interconnected computers
  • Cyberspace as the Internet as a whole and the WWW
  • Cyberspace as the environment of the Internet: the virtual space created by interconnected computers and computer networks on the Internet
  • Cyberspace as the electronic medium of computer networks for online communication
  • Cyberspace as a world of data/information/knowledge through the Internet
  • Cyberspace as a location for the free sharing of data, knowledge, ideas, culture, and community

Intelligent Cyberspace = Real AI + Cyberspace

CyberAI rests on the same five interrelated universes identified above for HMSI (reality, intelligence, data/information/knowledge, software and hardware), and on how they are all represented, mapped, coded and processed in cyberspace by computing machinery of any complexity, from smartphones to the internet of everything and beyond.

CyberAI is the science and engineering of reality-mentality-virtuality [continuum] cyberspace, its nature, intelligent information entities, models, theories, algorithms, codes, architectures, platforms, networks, and applications.

Its aim is to develop the AI Cyberspace of physical, mental and digital worlds: the totality of environments, physical, mental, digital or virtual, and application domains.

CAI, as a symbiotic hybrid human-machine superintelligence, is to supersede the extant statistical narrow AI with its branches, such as machine learning, deep learning, machine vision, NLP, cognitive computing, etc.

Real AI vs. Fake AI: Beneficial AI vs Adversarial AI

A major concern with today's AI is the so-called Adversarial AI [AAI], the malicious development and use of advanced digital technology and systems that have the ability to learn from past experiences, to reason or discover meaning from complex data.

Such AAI machine learning systems need high-quality datasets to train their algorithms, and so are liable to data-poisoning attacks, whereby malicious users inject false training data to corrupt the learned model.

AAI technology enables armies of killer robots and AI-enhanced cyber weapons.

Ultimately, the real danger of AAI lies in how it will enable cyber attackers.

Major critical national infrastructures — security services, telecommunications, electric grids, dams, wastewater, and critical manufacturing — are vulnerable to physical damage from AAI-enhanced cyber attacks.

How AI Machine Learning Techniques Are Leading to Skynet

AI-enhanced cyber attacks against nuclear systems would be almost impossible to detect and authenticate, let alone attribute, within the short timeframe for initiating a nuclear strike. According to open sources, operators at the North American Aerospace Defense Command have less than three minutes to assess and confirm initial indications from early-warning systems of an incoming attack. This compressed decision-making timeframe could put political leaders under intense pressure to make a decision to escalate during a crisis, with incomplete (and possibly false) information about a situation.

Ironically, new technologies designed to enhance information, such as 5G networks, machine learning, big-data analytics, and quantum computing, can also undermine its clear and reliable flow and communication, which is critical for effective deterrence.

Advances in AI could also exacerbate this cyber security challenge by enabling improvements to cyber offense. 

AI applications designed to enhance cyber security for nuclear forces could simultaneously make cyber-dependent nuclear weapon systems (e.g., communications, data processing, or early-warning sensors) more vulnerable to cyber attacks.

AI machine learning techniques might also exacerbate the escalation risks by manipulating the digital information landscape, where decisions about the use of nuclear weapons are made.

Building Rational Learning Human-AI Systems: Intelligent Human-Machine Robotics

Robotics can be defined as 5E AI (embrained, embodied, embedded, enacted and extended), or “Real AI in action in the physical world”. Such a real robot is a cyber-physical machine that has to cope with the dynamics, the uncertainties and the complexity of the physical and virtual world.

Knowledge, learning, perception, reasoning, decision, action, as well as interaction capabilities with the environment and other intelligent systems must be integrated in the control/governance architecture of the intelligent robotic system. Some examples of robots include robotic manipulators, autonomous vehicles (e.g. cars, drones, flying taxis), humanoid robots, military drones, robotic process automation, etc.

The Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission has proposed the following updated definition of AI for building rational deep learning systems: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions.

As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).”

The human-machine intelligence and learning systems act/operate through the perception-learning-action-perception causal cycle, as a continuous flow of information and action among the brain, the body, the robot and the world, all by sensing, thinking, predicting, deciding, acting, and adjusting or innovating.

Any powerful brain makes sense of the world around it by creating and testing hypotheses about the way the world works.

When presented with new situations, the intelligent agent makes predictions based on past experiences, takes action based on those hypotheses, perceives the results and re-adjusts its hypotheses by reinforcing or refuting. It is all under the control of intelligence, its world’s models, schemas and algorithms.

Mind.png

The perception-learning-action-perception cycle as a feedback loop helping us build an understanding of how the world works: the circular cybernetic flow of cognitive information that links the organism to its environment
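The predict-act-perceive-readjust cycle can be sketched as a simple belief update; the observation sequence and the learning rate below are assumptions for illustration only:

```python
# Toy sketch of the perception-learning-action cycle: the agent predicts
# based on its current hypothesis, observes the outcome, and reinforces
# or weakens that hypothesis. The 'world' here is a fixed observation list.

def update_belief(belief: float, outcome: int, rate: float = 0.2) -> float:
    """Move the current hypothesis toward the observed outcome."""
    return belief + rate * (outcome - belief)

observations = [1, 1, 0, 1, 1, 1, 0, 1]  # percepts from the environment
belief = 0.5                              # initial hypothesis
predictions = []
for outcome in observations:
    predictions.append(belief >= 0.5)        # act on the current hypothesis
    belief = update_belief(belief, outcome)  # perceive the result, readjust

print(round(belief, 3))  # the belief drifts toward the observed frequency
```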

There are entire institutes, such as the Max Planck Institute for Intelligent Systems, whose goal is “to understand the principles of Perception, Action and Learning in autonomous systems that successfully interact with complex environments and to use this understanding to design future artificially intelligent systems. The Institute studies these principles in biological, computational, hybrid, and material systems ranging from nano to macro scales”.

In all, Human-AI systems aim to investigate and understand the organizing principles of intelligent systems and their underlying perception-action-learning causal loops.

Who will be the dominant leader in AI in 10 years?

The A-list looks as follows:

  • AI4EE, or AI itself (Global Human-AI Platform)
  • Big Tech (the Tech Giants or the Big Nine, the largest and most dominant companies in the information technology industry, G-MAFIA and BAT-Triada)
  • China (China announced in 2017 that it wants to lead the world in AI by 2030, strategically allocating funds guided by a national strategy for AI)
  • USA (The United States' substantial investments in Narrow AI are aimed at extending its role as a global superpower)
  • Russia (“Whoever becomes the leader in AI [or artificial intelligence] will become the ruler of the world,” Vladimir Putin)

AI dominance can take three forms, according to the WEF.

"First, programmed AI that humans design in detail with a particular function in mind, like (most) manufacturing robots, virtual travel agents and Excel sheet functions. Second, statistical AI that learns to design itself given a particular predefined function or goal. Like humans, these systems are not designed in detail and also like humans, they can make decisions but they do not necessarily have the capability to explain why they made those decisions.

The third manifestation is AI-for-itself: a system that can act autonomously, responsibly, in a trustworthy style, and may very well be conscious, or not. We don’t know, because such a system does not yet exist".

In the case of AI itself, it is a real and true AI, designed, developed, deployed and distributed as a global AI Platform.

As to possible corporate leadership, there is only a small chance that any of today's top AI companies could become the leader, given their limiting business strategy of Narrow and Weak AI/ML/DL (see Top Performing Artificial Intelligence Companies of 2021).

Now, many countries have been developing their national Narrow AI strategies, including China, the US, the EU, and Russia.

Global_AI_Strategy_Landscape.png


Some big powers, such as the US, China or Russia, trick themselves into believing they can win the global narrow-AI arms race. This was largely provoked by the Russian President ignorantly claiming that "whoever becomes the leader in this sphere will become the ruler of the world."

Besides the AI arms race, some general trends in the domain of Narrow AI/ML/DL are highlighted in the AI Index Report 2021.

Overall, the world is entering a new era, with real artificial intelligence taking center stage, disrupting the narrow/weak AI of Machine Learning, Deep Learning or Deep Neural Networks.

[Real] AI is going to change the world from bottom to top more than anything else in the history of mankind, more than any general-purpose technology, from language and writing to steam power, electricity, nuclear power, IT and computing.

It is impacting the future of virtually every part of human life: government, education, work, industry and every human being. Real AI is to act as the main driver of emerging technologies like big data, robotics and IoT, nano-, bio-, neuro-, or cognitive technologies, and it will continue to act as the integrating GPT for the foreseeable future.

Conclusion

The truth is that few of us use “AI” in the right context. Confusing, misusing and misunderstanding Real AI vs. Non-Real AI can lead us to create false expectations and predictions, with fallacious statements and assumptions about what the future holds. Due to the current global emergency, the world is set to change radically, so thinking critically about the current state of AI technology is crucial if we want to thrive in the future.

To adapt and progress in a world driven by accelerated change, understand the implications of AI on human society, and know where we stand today, we need to first distinguish between two polar types of AI: Real/True/Global AI vs. Non-Real/False/Narrow/Weak/Human-like/Superhuman AI.

And the Real AI4EE Technology has all the power to transform the world and upend all old human institutions. Its socio-political and economic impact could be greater than that of the first nuclear bomb, Sputnik in 1957, and the first man in space in 1961, all taken together.



Azamat Abdoullaev

Tech Expert

Azamat Abdoullaev is a leading ontologist and theoretical physicist who introduced a universal world model as a standard ontology/semantics for human beings and computing machines. He holds a Ph.D. in mathematics and theoretical physics. 

   
