Why Is Today's Artificial Intelligence Different?

Why is today's narrow artificial intelligence (AI) not real?

Most machine learning and deep learning algorithms and models rely heavily on statistical learning theory rather than causal learning, and therefore capture spurious correlations instead of meaningful causation. This makes a critical difference for the whole enterprise: its applications, its prospects, and its impact on every part of human life.
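To make the difference concrete, here is a minimal Python sketch (the scenario, numbers, and variable names are invented for illustration, not taken from this article): a hidden confounder drives two unrelated outcomes, so a purely statistical learner sees a strong correlation where there is no causal link.

```python
import random

random.seed(0)

# Invented toy scenario: hot weather (the hidden confounder) drives both
# ice-cream sales and drowning incidents. Neither outcome causes the other,
# yet they correlate strongly.
n = 10_000
ice_cream, drownings = [], []
for _ in range(n):
    temp = random.gauss(25, 8)                    # the confounder
    ice_cream.append(2.0 * temp + random.gauss(0, 1))
    drownings.append(0.5 * temp + random.gauss(0, 1))

def pearson(xs, ys):
    """Plain Pearson correlation coefficient, no external libraries."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(ice_cream, drownings)
print(f"correlation = {r:.2f}")  # strong correlation, zero causation
```

A correlation near 1 would tempt a statistical model to "predict" drownings from ice-cream sales; a causal model would instead identify temperature as the common cause of both.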

We have to be intelligently critical and fully objective, as modern science demands, since this concerns all of us and our human future.

The AI world has been flooded with a series of gigantic language model projects promoted as the last word in AI.

First, OpenAI shocked the world a year ago with GPT-3. Then Google presented LaMDA and MUM, two narrow AIs billed as revolutionizing chatbots and search. And now the Beijing Academy of Artificial Intelligence (BAAI) conference has presented Wu Dao 2.0.


Artificial Intelligence is on everyone's mind today; hardly a news cycle passes without the mass media mentioning it. However, what people mean when they think, talk, write, or act on AI is full of confusion and cognitive biases.

Defining AI is a devil's task, as there is no commonly agreed definition to which we can refer, whether we are laymen, journalists, politicians, businesspeople, or AI researchers, developers, and engineers.

Things started the wrong way from the field's very inception and conception.

Alan Turing pioneered an anthropocentric approach in his famous article "Computing Machinery and Intelligence" (Mind, 1950), where he proposed the imitation game, now known as the Turing Test, and the false problem of whether machines can think like a human being. The term AI was later coined by Professor John McCarthy in 1956, while he was debating with a group of computer scientists whether a computer could think and imitate human-like intelligence. Three years later, Arthur Samuel built a computer program to play checkers and coined the term "machine learning": "the field of study that gives computers the ability to learn without being explicitly programmed".

These three assumptions made Anthropomorphic and Anthropocentric AI (AAAI) the mainstream, understood as "the simulation of human intelligence in machines that are programmed to think like humans and mimic their actions".

The field of AAAI has blossomed significantly in the past 10 years thanks to the fortunate convergence of three digital technology forces: advances in mathematical tools and statistical models, machine learning and deep learning; computing memory, power, and hardware; all combined with the explosion of digital data such as text, images, video, and speech.

Given enough sample/training data, ML/DL algorithms can predict, personalise, recognise, and uncover statistical data patterns to provide insights or identify anomalies, by interpolating or extrapolating their learned functions/models.
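A hedged sketch of what "interpolating or extrapolating its learned functions/models" means in practice (toy data invented for illustration): fit a line to training samples by least squares, then query the learned function inside and far outside the training range.

```python
# Toy training data drawn from y = 2x + 1 on the range [0, 5].
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0 * x + 1.0 for x in xs]

# Closed-form least-squares fit of y = w*x + b.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
b = my - w * mx

def predict(x):
    return w * x + b

print(predict(2.5))    # interpolation: a query inside the training range
print(predict(100.0))  # extrapolation: far outside it, where trust drops off
```

Inside the training range the learned function is well supported by data; far outside it, the model still answers confidently, which is exactly where purely statistical learners become unreliable.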

Such breakthroughs have led to many useful applications, such as digital assistants, robotic process automation systems, conversational, speech-recognition and natural-language-processing software bots, gaming machines, recommendation engines, social media personalisation algorithms, and self-driving cars. This might make people think that AI is reaching some superintelligent level. But in reality, today's AI is "narrow" or "weak": dull and dumb, with no knowledge or intelligence, understanding or reasoning, learning or cognition.

As a result, AAAI technology fails: hyped beyond expectations and used in unexpected ways, it makes bad algorithmic decisions that threaten human life, rights, and safety.

For example, a flawed voice-recognition algorithm used to detect immigration fraud led the UK to deport thousands of students in error, causing them to lose homes, jobs, and futures; and IBM Watson recommended "unsafe and incorrect" cancer treatments.

Some compare ML/DL algorithms to idiot savants: they can be super intelligent at one very, very narrow task, like diagnosing lung cancer better than a radiologist with a PhD, which feels particularly hyper-intelligent because it is not layman knowledge (Hume, 2018). The biggest weakness of ML/DL machines is their need for millions of tagged examples to tell a cat from a dog, something a two-year-old would probably learn after one or two short exposures.

There are a lot of general, philosophical, scientific, and specialist articles, of which some 99.99% concern narrow-AI (NAI) R&D.

There are about two dozen NAI journals, from Minds and Machines to AI to Nature Machine Intelligence to AI Magazine to AI & Society.

The main issue is that NAI articles are overspecialized, so there is a virtually unlimited number of fragmented special topics, as listed in the Artificial Intelligence journal:

cognition and AI, automated reasoning and inference, case-based reasoning, commonsense reasoning, computer vision, constraint processing, ethical AI, heuristic search, human interfaces, intelligent robotics, knowledge representation, machine learning, multi-agent systems, natural language processing, planning and action, and reasoning under uncertainty.

It is not surprising that "some experts in AI think its name fuels confusion and hype of the sort that led to past ‘AI winters’ of disappointment". [Why Artificial Intelligence Isn’t Intelligent]

What is Real AI?

There are two classes of Machine Intelligence, variously framed as AI and Non-AI, AI and ML/DL, Real AI and Non-Real AI, or Global AI vs. Narrow AI, alongside Artificial General Intelligence and Artificial Superhuman Intelligence.

Non-Real AI refers to the simulation of human intelligence in machines that are programmed to think and act like humans.

Real AI is all about reality, mentality, and causality, and how they are reflected as digital mentality, or machine intelligence, in cyberspace, virtuality, or mixed reality.

Its key domains as interacting universes are:

  • Actuality (the Physical World, the Universe, the total environment; philosophy, ontology, science, mathematics and technology)
  • Mentality (Mental or Counterfactual World; cognitive science, psychology, neuroscience)
  • Virtuality (Digital Data Universe/Virtual World/Mixed Reality; cybernetics, computer science, AI, ML, DL, data science, data analytics, information engineering).

Real AI runs causal algos, or algorithms as sets of causal rules, for solving any complex real-world problem or accomplishing tasks.

As such, Real AI emerges as a global AI platform embracing Non-Real AI of all sorts and descriptions:

Narrow and Weak AI, ML, DL (deep neural networks). These emulate, mimic, simulate, counterfeit, or fake synapse-connected brain neurons, some cognitive functions/skills/capacities, or some intelligent behavior, running on graphics processing units (GPUs) or processors specialized for AI functions.

Strong or General AI. It is conceived as a generally intelligent system that can act and think much like humans, but at the speed of the fastest computer systems. It should also have consciousness, thoughts, self-awareness, sentience, and sapience. As Geoffrey Hinton noted: "There are one trillion synapses in a cubic centimeter of the brain. If there is such a thing as general AI, [the system] would probably require one trillion synapses."

Superhuman AI, or Artificial Superintelligence (ASI). "An intellect that is much smarter than the best human brain in practically every field, including scientific creativity, general wisdom and social skills". The creation of superintelligence, according to some, could result in disaster for humanity, possibly even extinction.

Many high-profile figures warn about ASI as being disastrous on a global scale. Tesla and SpaceX CEO Elon Musk has predicted dire consequences, claiming that ASI is potentially more dangerous than nuclear warheads, and has frequently called for greater regulatory oversight on the development of superintelligence. “The biggest issue I see with so-called AI experts is that they think they know more than they do, and they think they are smarter than they actually are.” “This tends to plague smart people. They define themselves by their intelligence, and they don’t like the idea that a machine could be way smarter than them, so they discount the idea—which is fundamentally flawed.”

Radically enhanced human brains could be achievable through the convergence of genetic engineering, nanotechnology, information technology, and cognitive science, or whole brain emulation.

Or, more realistically, a synergetic human-machine superintelligence is likely to come about through advances in Real/Global AI, computer science, cognitive science, and brain digitization (BCI/MMI/NCI/BMI), topped up with a digital superintelligence neocortex, such as Neuralink is promising to deliver.

What's Wrong With Today's Artificial Intelligence?

Today’s artificial intelligence (AI) systems inductively “learn” from selected training data, as if from experience, observation, and trial and error, as though acquiring knowledge on their own; this is known as machine learning.

Standard machine and deep learning algorithms extract data patterns as statistical correlations from raw data, structured or unstructured. This is true both for simple algorithms, like logistic regression, and for sophisticated ones, like neural networks, which can learn deeper patterns from input data.
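As an illustration of the "simple algorithm" case, here is logistic regression trained by plain gradient descent on an invented one-dimensional dataset: the model finds a statistical decision boundary between two clusters, but it has no notion of why the classes differ.

```python
import math

# Invented toy dataset: 1-D points with binary labels.
data = [(-2.0, 0), (-1.5, 0), (-1.0, 0), (1.0, 1), (1.5, 1), (2.0, 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Batch gradient descent on the log-loss.
w, b, lr = 0.0, 0.0, 0.5
for _ in range(1000):
    gw = gb = 0.0
    for x, y in data:
        err = sigmoid(w * x + b) - y   # gradient of the log-loss wrt the logit
        gw += err * x
        gb += err
    w -= lr * gw / len(data)
    b -= lr * gb / len(data)

accuracy = sum((sigmoid(w * x + b) > 0.5) == bool(y) for x, y in data) / len(data)
print(f"training accuracy = {accuracy:.0%}")
```

The fitted weights are nothing but a compressed statistical summary of the correlations in the training set, which is precisely the kind of pattern extraction the paragraph describes.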

But there is a big BUT, a fatal flaw in the whole enterprise. Today's AI is largely machine learning, and its techniques, deep learning algorithms, and deep neural networks cannot identify causality: its elements and structures, processes and mechanisms, rules and relationships, data and models, everything that makes up our world.

This leads to all sorts of decision and prediction errors, data and algorithmic biases, a lack of quality data, and implementation failings, or, in short, the absence of real machine intelligence and learning.

Integrating symbolic AI with the statistical AI of machine learning, Causal Machine Intelligence and Learning makes the next generation of powerful intelligent machines, running the master [causal] learning algorithms. It allows intelligent machines to reason about the world, factual and counterfactual, computing its alternatives and scenarios, and effectively interacting with any complex environment, physical or virtual.
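What a causal model adds can be sketched with a tiny structural causal model (all variable names and coefficients are invented for illustration): observing X and intervening on X, in the sense of Pearl's do-operator, give different answers whenever a confounder is present.

```python
import random

random.seed(1)

# Structural causal model: Z -> X, Z -> Y, X -> Y.
# The true causal effect of X on Y is the coefficient 2.0.
def sample(do_x=None):
    z = random.gauss(0, 1)               # hidden confounder
    x = z if do_x is None else do_x      # do(X=x) cuts the Z -> X arrow
    y = 2.0 * x + 3.0 * z + random.gauss(0, 0.1)
    return x, y

# The observational regression slope of Y on X mixes the causal path (2.0)
# with the confounded path through Z, giving roughly 5.0.
obs = [sample() for _ in range(50_000)]
mx = sum(x for x, _ in obs) / len(obs)
my = sum(y for _, y in obs) / len(obs)
slope_obs = (sum((x - mx) * (y - my) for x, y in obs)
             / sum((x - mx) ** 2 for x, _ in obs))

# The interventional contrast E[Y | do(X=1)] - E[Y | do(X=0)] recovers 2.0.
y_do1 = sum(sample(do_x=1.0)[1] for _ in range(50_000)) / 50_000
y_do0 = sum(sample(do_x=0.0)[1] for _ in range(50_000)) / 50_000
causal_effect = y_do1 - y_do0

print(f"observational slope ~ {slope_obs:.1f}")      # correlation, not causation
print(f"interventional effect ~ {causal_effect:.1f}")  # the true mechanism
```

A correlation-only learner can answer the observational question; only the causal structure lets the system answer "what happens if we act", which is the factual-and-counterfactual reasoning described above.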

What is the Principal Difference between Real AI and Non-Real AI?

It is like the five contemporary machine-learning paradigms (evolutionary algorithms, connectionism and neural networks, symbolism, Bayesian networks, and analogical reasoning) vs. one AI "master algorithm" capable of learning potentially anything.


From The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, by Pedro Domingos.

The Master AI knows what you search on Google, buy on Amazon, listen to on Apple Music, watch on Netflix, share on Facebook, and all your sentiments, preferences and choices and much more.


Generally, there are two broad types of Machine Intelligence:

General [Scientific or Real] AI vs. Narrow [Statistical or Symbolic] AI.

The real/general/causal AI is the only true AI: it is all about reality, mentality, and virtuality (digital reality), relying on world models and substantial causality instead of spurious correlations or special knowledge.


What is real intelligence? What is natural intelligence and artificial intelligence and how are they different from each other?



Related articles and further reading:

  • Wu Dao 2.0: Why China Is Leading the Artificial Intelligence Race (https://www.bbntimes.com/science/wu-dao-2-0-why-china-is-leading-the-artificial-intelligence-race)
  • How to Create an Artificial Intelligence General Technology Platform (https://www.bbntimes.com/technology/how-to-create-an-artificial-intelligence-general-technology-platform)
  • Mathematics of Machine Learning: Data, Algorithm, Model, and Causal Learning (https://www.bbntimes.com/science/mathematics-of-machine-learning-data-algorithm-model-and-causal-learning)
  • On Global AI, or what are the three domains of AI?
  • What Is the Difference Between the Learning Curve of Machine Learning and Artificial Intelligence? (https://www.bbntimes.com/science/what-is-the-difference-between-the-learning-curve-of-machine-learning-and-artificial-intelligence)
  • Intelligent Governments and European Green Deal: Towards Smart Green Europe
  • "Human + Machine: Reimagining Work in the Age of AI", Paul R. Daugherty and H. James Wilson
  • AI today: definition, use cases, risks, and unexpected consequences on society




Azamat Abdoullaev

Tech Expert

Azamat Abdoullaev is a leading ontologist and theoretical physicist who introduced a universal world model as a standard ontology/semantics for human beings and computing machines. He holds a Ph.D. in mathematics and theoretical physics. 
