What is the State-of-the-Art & Future of Artificial Intelligence?

Artificial intelligence (AI) is providing organizations, researchers and governments with new tools capable of achieving major goals.
Numerous directions have already been taken, and others have yet to be taken. AI is carrying out complex operations in less time and with less effort.

The Pros and Cons of Artificial Intelligence


In 1958, the New York Times reported on a demonstration by the US Navy of Frank Rosenblatt’s “perceptron” (a rudimentary precursor to today’s deep neural networks): “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself, and be conscious of its existence”. This optimistic take was quickly followed by similar proclamations from AI pioneers, this time about the promise of logic-based “symbolic” AI.

In 1960 Herbert Simon declared that, “Machines will be capable, within twenty years, of doing any work that a man can do”. The following year, Claude Shannon echoed this prediction: “I confidently expect that within a matter of 10 or 15 years, something will emerge from the laboratory which is not too far from the robot of science fiction fame”. And a few years later Marvin Minsky predicted that, “Within a generation...the problems of creating ‘artificial intelligence’ will be substantially solved”.

John McCarthy promoted the term Artificial Intelligence with the wishful conjecture that "every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions, and concepts, solve the kinds of problems now reserved for humans, and improve themselves."

AI was assumed to simulate human reasoning, giving a computer program the ability to learn and think. On this view, everything is AI if it involves a "program doing something that we would normally think would rely on the intelligence of a human".

Such man-made anthropomorphic intelligence has been developed as simple rule-based systems, symbolic logical general AI and/or sub-symbolic statistical narrow AI. It is an automation technology that emulates human performance, typically by learning from it, to enhance existing applications and processes.

Such a human brain-mind-behavior mimicking AI has been marked by overconfident predictions from its inception, resulting in a series of AI springs and winters.

A popular concept in AI is the intelligent agent (IA): an autonomous entity that acts upon an environment, directing its activity towards achieving goals (i.e. it is an agent), observing through sensors and acting through actuators (i.e. it is intelligent). Intelligent agents may also learn or use knowledge to achieve their goals. They may be very simple or very complex. A reflex machine, such as a thermostat, is considered an example of an intelligent agent.
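
To make the notion concrete, here is a minimal Python sketch of the thermostat-style reflex agent; the class name, setpoint and tolerance values are illustrative assumptions, not a standard agent API.

```python
# A simple reflex agent in the thermostat spirit: it maps a percept
# (sensed temperature) straight to an action via condition-action rules.
class ThermostatAgent:
    def __init__(self, setpoint: float, tolerance: float = 0.5):
        self.setpoint = setpoint    # the goal the agent directs activity toward
        self.tolerance = tolerance  # dead band to avoid rapid on/off switching

    def act(self, sensed_temperature: float) -> str:
        # No model, no learning, no memory: pure stimulus-response.
        if sensed_temperature < self.setpoint - self.tolerance:
            return "heat_on"
        if sensed_temperature > self.setpoint + self.tolerance:
            return "heat_off"
        return "no_op"

agent = ThermostatAgent(setpoint=21.0)
print(agent.act(19.0))  # -> heat_on
print(agent.act(23.0))  # -> heat_off
```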

It is misleadingly assumed that goal-directed behavior is the essence of intelligence, formalized as an "objective function", negative or positive: a "reward function", a "fitness function", a "loss function", a "cost function", a "profit function" or a "utility function", each mapping an event [or the values of one or more variables] onto a real number intuitively representing some "cost", "loss" or "reward".
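
As a hedged illustration of that formalization, the Python sketch below maps events onto real numbers; the squared-error "loss" and the profit-minus-penalty "reward" are just two common choices among many.

```python
# Objective functions as plain mappings from events to real numbers.
def squared_error_loss(prediction: float, outcome: float) -> float:
    # A "cost"/"loss" to minimize.
    return (prediction - outcome) ** 2

def net_reward(profit: float, penalty: float) -> float:
    # A "reward"/"profit"/"utility" to maximize.
    return profit - penalty

print(squared_error_loss(2.5, 3.0))  # 0.25 -> cost of the event
print(net_reward(10.0, 4.0))         # 6.0  -> reward of the event
```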

What are Machine Learning, Deep Learning and Artificial Neural Networks?

Machine learning (ML) is a type of artificial intelligence (AI), viewed as a tabula rasa having no innate knowledge, data patterns, rules or concepts, whose algorithms use historical data as input to predict new output values. Its most advanced deep learning techniques assume a “blank slate” state in which all specific narrow “intelligence” can be rote-learned from training data. Meanwhile, a newly born mammal starts life with a level of built-in, inherited knowledge-experience: it stands within minutes, knows how to feed almost immediately, and walks or runs within hours.
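
The "historical data in, predicted values out" pattern fits in a few lines; the numbers below are invented toy data, and ordinary least squares stands in for the learning algorithm.

```python
import numpy as np

# Historical input/output pairs (the "experience" the model learns from).
X = np.array([[1.0], [2.0], [3.0], [4.0]])
y = np.array([2.1, 3.9, 6.2, 8.1])

# Fit a line y ~ w*x + b by least squares.
A = np.hstack([X, np.ones_like(X)])
w, b = np.linalg.lstsq(A, y, rcond=None)[0]

# Predict a new output value for an unseen input.
print(w * 5.0 + b)  # roughly 10: the pattern extracted from history
```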

Again, the entire advance in deep learning was enabled by the backpropagation algorithm, which lets large and complex neural networks learn from training data by propagating error derivatives backwards through the network. Rumelhart, Hinton and Williams published “Learning representations by back-propagating errors” in 1986. It took another 26 years before the increase in computing power (GPUs enabling the complex calculations required by backpropagation to run in parallel) and the growth of “big data” enabled that discovery to be used at the scale seen today.
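
For concreteness, here is a toy NumPy sketch of backpropagation on a two-layer sigmoid network in the spirit of the 1986 paper; the data, layer sizes and learning rate are assumptions chosen purely for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))                 # toy inputs
y = (X[:, :1] * X[:, 1:] > 0).astype(float)  # toy target: do the signs agree?

W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate error derivatives layer by layer (chain rule).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(((out > 0.5) == y).mean())  # training accuracy, typically near 1.0
```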

Deep learning employs neural networks of various architectures and structures to train a model to capture the linear and non-linear relationships between input data and output data. Non-linearity is a key differentiator from traditional machine learning models, which are mostly linear in nature.
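
A quick numerical check of the non-linearity point: XOR is not linearly separable, so the best purely linear fit collapses to an uninformative constant, whereas a network with a non-linear hidden layer (like the sketch above) can represent it exactly. NumPy is the only assumption here.

```python
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])        # XOR targets

A = np.hstack([X, np.ones((4, 1))])       # inputs plus a bias column
w = np.linalg.lstsq(A, y, rcond=None)[0]  # best possible linear fit

print(A @ w)  # all four outputs ~0.5: no line separates XOR at all
```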

To quote AI pioneer Stuart Russell: “I don’t think deep learning evolves into AGI. Artificial General Intelligence is not going to be reached by just having bigger deep learning networks and more data…

Deep learning systems don’t know anything, they can’t reason, and they can’t accumulate knowledge, they can’t apply what they learned in one context to solve problems in another context etc. And these are just elementary things that humans do all the time.”

Machine learning capabilities are narrow. An ML algorithm may be able to achieve better-than-human performance but only on exceedingly specific tasks and only after immensely expensive training.

Recommendation engines, fraud detection, spam filtering, malware threat detection, business process automation (BPA), predictive maintenance, customer behavior analytics, and business operational patterns are common use cases for ML.
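
To ground one of these use cases, here is a minimal spam-filter sketch using scikit-learn's bag-of-words vectorizer and naive Bayes; the four training messages are made up, and a real filter would train on a large labeled corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Tiny invented training corpus.
messages = ["win a free prize now", "claim your free money",
            "meeting moved to 3pm", "lunch tomorrow?"]
labels = ["spam", "spam", "ham", "ham"]

# Vectorize the text into word counts, then fit a naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize money now"]))  # most likely ['spam']
```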

Many of today's leading companies, such as Apple, Microsoft, Facebook, Google, Amazon, Netflix, Uber, Alibaba, Baidu, Tencent, etc., make Deep ML a central part of their operations.

Deep learning is a type of machine learning and artificial intelligence that imitates the human brain. It is a core part of data science, which includes statistics and predictive modeling. At its simplest, deep learning is a way to automate predictive analytics in which algorithms are stacked in a hierarchy of increasing complexity and abstraction. Unable to understand the concept of things as feature sets, a computer program that uses DL algorithms must be shown a training set and sort through millions of videos, audio clips, words or images, finding patterns of data/pixels in the digital data in order to label/classify the things.

Deep learning rests on an advanced machine learning algorithm known as an artificial neural network, which underpins most deep learning models; the approach is also referred to as deep neural learning or deep neural networking.

DNNs come in many architectures: recurrent neural networks, convolutional neural networks, feedforward neural networks and others, each with benefits for specific use cases. All function in similar ways -- data is fed in and the model figures out for itself whether it has made the right interpretation or decision about given data sets. Neural networks involve a trial-and-error process, so they need massive amounts of data on which to train, and most deep learning models cannot train on unstructured, unlabeled data without extensive preprocessing. Yet roughly 80% of the data humans and machines create is unstructured and unlabeled.

Today's AAI is over-focused on specific behavioral traits [BEHAVIORISM] or cognitive functions/skills/capacities/capabilities [MENTALISM], i.e. specific problem-solving in specific domains, and there is still a very long way for such an AI to go in the future.

The key problem with AAI is: “Will it ever be possible to create a general AI with sentience, conscience and consciousness?”

Another methodological mistake is the assumption that AI starts blank, acquiring knowledge as the outside world is impressed upon it. This is the tabula rasa theory: any intelligence, natural or artificial, is born or created "without built-in mental content, and therefore all knowledge about reality comes from experience or perception" (Aristotle, Zeno of Citium, Avicenna, Locke, many psychologists and neurobiologists, and AAI/ML/DNN researchers).

Ibn Sina argued that the human intellect at birth resembles a tabula rasa, "a pure potentiality that is actualized through education and comes to know". In Locke's philosophy, at birth the (human) mind is a "blank slate" without rules for processing data; data is added and rules for processing are formed solely by one's sensory experiences.

This tabula rasa tenet was presented in "A New Direction in AI: Toward a Computational Theory of Perceptions" [Zadeh (2001)]. Here "at the moment when the senses of the outside world of AI-system begin to function, the world of physical objects and phenomena begins to exist for this intelligent system. However, at the same moment of time the world of mental objects and events of this AI-system is empty. This may be considered as one of the tenets of the model of perception. At the moment of the opening of perception channels the AI-system undergoes a kind of information blow. Sensors start sending the stream of data into the system, and this data has not any mapping in the personal world of mental objects which the AI-system has at this time".

Subjective Reality and Strong Artificial Intelligence

That tenet is the mother of all AI mistakes. The statement should instead read: "Sensors start sending the stream of data into the system, and this data has ALL mapping in the personal world of mental objects which the AI-system has at this time".

Or: at its very birth/creation, the (human) mind or AI has prior/innate/encoded/programmed knowledge of reality with causative rules for processing the world's data, and an endless new stream of data is added to be processed by innate master algorithms.

In fact, the tabula rasa model of AI has been refuted by some of its own successful ML game-playing systems, such as AlphaZero, a computer program developed by DeepMind to master the games of chess, shogi and Go. It achieved superhuman performance in these board games using self-play and "tabula rasa" reinforcement learning, meaning it had no access to human games or hard-coded human knowledge about either game, BUT it was given the rules of the games.

Again, reinforcement learning is the problem faced by an agent that learns behavior through trial-and-error interactions with a dynamic environment. In all its varieties (associative, deep, safe or inverse), it differs from supervised learning in not needing labelled input/output pairs to be presented and in not needing sub-optimal actions to be explicitly corrected, focusing instead on finding a balance between exploration (of uncharted territory) and exploitation (of current knowledge).
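
A hedged sketch of that exploration/exploitation balance: tabular Q-learning with an epsilon-greedy policy on a made-up five-state chain environment, where moving right eventually reaches a reward. All constants are illustrative.

```python
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions)) # value estimates learned by trial and error
rng = np.random.default_rng(0)
alpha, gamma, epsilon = 0.1, 0.9, 0.2

for episode in range(500):
    s = 0
    while s != n_states - 1:        # rightmost state is terminal and rewarded
        # Exploration vs. exploitation: random action with probability epsilon.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Update toward the observed reward signal (no labeled pairs needed).
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: move right in states 0-3
```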

All in all, the real AI model covers the abstract intelligent agents (AIA) paradigm together with the tabula rasa model of AI, with all their real-world implementations as computer systems, biological systems, human minds, superminds [organizations] or autonomous intelligent agents.

In this rational-action-goal paradigm, an IA possesses an internal "model" of its environment, encapsulating all the agent's beliefs about the world.

What is Natural Intelligence?

Natural intelligence is the intelligence created by nature through natural evolutionary mechanisms: biological intelligence embodied as the brain, animal and human, plus any hypothetical alien intelligence. What is real intelligence? What are natural intelligence and artificial intelligence, and how do they differ from each other?

So, what are true, real and genuine AI and fake, false and fictitious AI?

When we hear about artificial intelligence, the first thing we think of is human-like machines and robots, from Frankenstein's creature to Skynet's synthetic intelligence, that wreak havoc on humans and the Earth. And many people still dismiss it as dystopian sci-fi far from the truth.

Humanity has two polar future worlds to be taken over by two polar AIs.

The first, mainstream paradigm is AAI (Artificial, Anthropic, Applied, Automated, Weak and Narrow "Black Box" AI), which is promoted by the big tech IT companies and an army of AAI researchers and scientists, developers and technologists, businessmen and investors, as well as all sorts of politicians and think-tankers. Such an AI is based on the principle that human intelligence can be defined in a way that a machine can easily mimic it and execute tasks, from the simplest to the most complex.

An example is Nvidia-like AAI; Nvidia has recently released new fake AI/ML/DL software and hardware.

Conversational FAI framework

Real-time natural language understanding will transform how we interact with intelligent machines and applications.

The framework enables developers to use pre-trained deep learning models and software tools to create conversational AI services, such as advanced translation or transcription models, specialized chatbots and digital assistants.

FAI supercomputing. Nvidia introduced a new version of DGX SuperPod, Nvidia's supercomputing system that the vendor bills as made for AI workloads.

The new cloud-native DGX SuperPod uses Nvidia's BlueField-2 DPUs, unveiled last year, providing users with a more secure connection to their data. Like Nvidia's other DGX SuperPods, this new supercomputer will contain at least 20 Nvidia DGX A100 systems and Nvidia InfiniBand HDR networking.

AI chipmakers can support three key AI workloads: data management, training and inferencing.

NVIDIA delivers GPU acceleration everywhere—to data centers, desktops, laptops, and the world’s fastest supercomputers. NVIDIA GPU deep learning is available on services from Amazon, Google, IBM, Microsoft, and many others.

The tech giants -- Google, Microsoft, Amazon, Apple and Facebook -- have also created chips made for Fake AI/ML/DL, but these are intended for their own specific applications.

Real AI vs. Fake AI

“I have always been convinced that the only way to get artificial intelligence to work is to do the computation in a way similar to the human brain. That is the goal I have been pursuing. We are making progress, though we still have lots to learn about how the brain actually works.” — Geoffrey Hinton

“My view is: throw it all away and start again.” — Geoffrey Hinton

Why so? Because of a badly wrong assumption: AI = ML = DL = Human Brain = AGI = Strong AI.

Real AI vs. Artificial General Intelligence (AGI or Strong AI)

How far are we from achieving Artificial General Intelligence? How can we determine if computers have acquired general intelligence? Such questions are becoming a main topic today.

A test of machine intelligence was first introduced as the “imitation game”, later known as the “Turing test”, in the article “Computing Machinery and Intelligence” (Turing, 1950).

The Turing test is restricted to blind language communication, missing causal relationships; a basic requirement for a computer to pass the test is that it can handle causal knowledge and intervene in the world.

Headlines sounding the alarms that artificial intelligence (AI) will lead humanity to a dystopian future seem to be everywhere. Prominent thought leaders, from Silicon Valley figures to legendary scientists, have warned that should AI evolve into artificial general intelligence (AGI)—AI that is as capable of learning intellectual tasks as humans are—civilization will be under serious threat.

Artificial General Intelligence (AGI or Strong AI), "the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can", is an acausal construct, unreal and, therefore, non-achievable.

Instead of modeling the world itself, AI systems such as artificial general intelligence systems are wrongly designed with the human brain as their reference, of which we know little, if anything.

Experts' predictions, such as AGI being achieved as early as 2030, or the emergence of AGI or the singularity by the year 2060, therefore have no use, sense or value.

McKinsey has even issued "An executive primer on artificial general intelligence", with the annotation: "While human-like artificial general intelligence may not be imminent, substantial advances may be possible in the coming years. Executives can prepare by recognizing the early signs of progress".

Again, there is only one valid form of AI, Artificial Real Intelligence, or real AI having a consistent, coherent, comprehensive causal model of reality.

Real AI simulates/models/represents/maps/understands the world of reality, as objective and subjective worlds, digital reality and mixed realities, its cause and effect relationships, to effectively interact with any environments, physical, mental or virtual.

It overrules the fragmentary models of AI, such as narrow and weak AI vs. strong and general AI, statistical ML/DL vs. symbolic logical AI.

The true paradigm is a RAI (Real Causal Explainable and "White Box" AI), or the real, true, genuine and autonomous cybernetic intelligence vs the extant fake, false and fictitious anthropomorphic intelligence.

An example of Real Intelligence is a hybrid human-AI Technology, Man-Machine Superintelligence, which is mostly unknown to the big tech and AAI army. Its goal is to mimic reality and mentality, human cognitive skills and functions, capacities and capabilities and activities, in computing machinery.

The Man-Machine Superintelligence as [RealAI] is all about 5 interrelated universes, as the key factors of global intelligent cyberspace:

  • reality/world/universe/nature/environment/spacetime, as the totality of all entities and relationships;
  • intelligence/intellect/mind/reasoning/understanding, as human minds and AI/ML models;
  • data/information/knowledge universe, as the world wide web; data points, data sets, big data, global data, world’s data; digital data, data types, structures, patterns and relationships; information space, information entities, common and scientific and technological knowledge;
  • software universe, as the web applications, application software and system software, source or machine codes, as AI/ML codes, programs, languages, libraries;
  • hardware universe, as the Internet, the IoT, CPUs, GPUs, AI/ML chips, digital platforms, supercomputers, quantum computers, cyber-physical networks, intelligent machinery and humans.

Real AI concerns how all of this is represented, mapped, coded and processed in cyberspace/digital reality by computing machinery of any complexity, from smartphones to the internet of everything and beyond.

AI is the science and engineering of reality-mentality-virtuality [continuum] cyberspace, its nature, intelligent information entities, models, theories, algorithms, codes, architectures and applications.

Its subject is to develop the AI Cyberspace of physical, mental and digital worlds, the totality of any environments, physical, mental, digital or virtual, and application domains.

RAI is to represent and model, measure and compute, analyse, interpret, describe, predict and prescribe, processing unlimited amounts of big data, transforming unstructured data (e.g. text, voice, etc.) into structured data (e.g. categories, words, numbers, etc.), and discovering generalized laws and causal rules for the future.
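
The unstructured-to-structured step can be illustrated minimally as a bag-of-words table built from raw sentences; the two documents below are invented for the example.

```python
from collections import Counter

documents = ["causal models explain data",
             "statistical models fit data"]

# Structured columns: the sorted vocabulary of all words seen.
vocabulary = sorted({w for doc in documents for w in doc.split()})
# Structured numeric rows: word counts per document.
rows = [[Counter(doc.split())[w] for w in vocabulary] for doc in documents]

print(vocabulary)
print(rows)
```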

RAI as a symbiotic hybrid human-machine superintelligence is to overrule the extant statistical narrow AI with its branches, such as machine learning, deep learning, machine vision, NLP, cognitive computing, etc.

AAI: Opportunities and Risks

Unlike RAI, AAI presents an existential danger to humanity if it progresses as it is, as specialized superhuman automated machine learning systems, from task-specific cognitive robots to professional bots to self-driving autonomous transport.

The list of those who have pointed to the risks of AI numbers such luminaries as Alan Turing, Norbert Wiener, I.J. Good, Marvin Minsky, Elon Musk, Professor Stephen Hawking and even Microsoft co-founder Bill Gates.

“Mark my words, AI is far more dangerous than nukes... With artificial intelligence we are summoning the demon.” Tesla and SpaceX founder Elon Musk

“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization... The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded.” The late physicist Stephen Hawking

“Computers are going to take over from humans, no question...Will we be the gods? Will we be the family pets? Or will we be ants that get stepped on?” Apple co-founder Steve Wozniak

“I am in the camp that is concerned about super intelligence,” Bill Gates

AI has the ability to “circulate tendentious opinions and false data that could poison public debates and even manipulate the opinions of millions of people, to the point of endangering the very institutions that guarantee peaceful civil coexistence.” Pope Francis, “The Common Good in the Digital Age”

A human-like and human-level AAI is a foundationally and fundamentally wrong idea, even more harmful than human cloning. Ex Machina’s Ava is a good lesson NOT to copy the biological brain/human intelligence.

A humanoid robot with human-level artificial general intelligence (AGI) could be of 3 types:

  • The human-like zombie mindless AI reaching human-level intelligence using some combination of brute force search techniques and machine learning with big data, exploiting senses and computational capacity unavailable to humans.
  • The human-like mindful AI mimicking exactly the neural processing in the human brain, with a whole brain emulation: "copying the brain of a specific individual – scanning its structure in nanoscopic detail, replicating its physical behaviour in an artificial substrate, and embodying the result in a humanoid form".
  • The superintelligent AI exceeding humans in all respects.

In all 3 scenarios, humans are doomed to occupy a subordinate position in the space of possible minds as pictured by the H-C plane [the human-likeness and capacity for consciousness of various real entities, both natural and artificial], unless we go for the ‘Void of Inscrutability’, a human-unlike real man-AI superintelligence.

RISKS OF NON-REAL ARTIFICIAL INTELLIGENCE

  • AI, Robotics and Automation-spurred massive job loss.
  • Privacy violations, a handful of big tech AI companies control billions of minds every day manipulating people’s attention, opinions, emotions, decisions, and behaviors with personalized information.
  • 'Deepfakes'
  • Algorithmic bias caused by bad data.
  • Socioeconomic inequality.
  • Weapons automatization, AI-powered weaponry, a global AI arms race.
  • Malicious use of AI, threatening digital security (e.g. through criminals training machines to hack or socially engineer victims at human or superhuman levels of performance), physical security (e.g. non-state actors weaponizing consumer drones), political security (e.g. through privacy-eliminating surveillance, profiling, and repression, or through automated and targeted disinformation campaigns), and social security (misuse of facial recognition technology in offices, schools and other venues).
  • Human brains hacking and replacement.
  • Destructive superintelligence — aka artificial general human-like intelligence

“Businesses investing in the current form of machine learning (ML), eg AutoML, have just been paying to automate a process that fits curves to data without an understanding of the real world. They are effectively driving forward by looking in the rear-view mirror,” says causaLens CEO Darko Matovski.
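
That critique is easy to demonstrate numerically: a high-degree polynomial fits past data well inside its training range and extrapolates absurdly outside it. The data below is synthetic, with no claim about any real process.

```python
import numpy as np

# "Historical" observations of a smooth process with a little noise.
x = np.linspace(0, 5, 30)
y = np.sin(x) + np.random.default_rng(1).normal(0, 0.05, 30)

coeffs = np.polyfit(x, y, deg=7)   # curve-fitting the rear-view mirror
print(np.polyval(coeffs, 2.5))     # inside the data: close to sin(2.5) ~ 0.60
print(np.polyval(coeffs, 10.0))    # outside the data: wildly wrong
```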

Anthropomorphism in AI, the attribution of human-like feelings, mental states, and behavioral characteristics to computing machinery, as has long been done with inanimate objects, animals, natural phenomena and supernatural entities, is an immoral enterprise.

For all AAI applications, this implies building AI systems based on embedded ethical principles, trust and transparency. The result is all sorts of frameworks on the ethical aspects of artificial intelligence, robotics and related technologies, such as the one proposed by the EC, and the EU guidelines on ethics in artificial intelligence: Context and implementation.

Such a human-centric approach to human-like artificial intelligence... "highlights a number of ethical, legal and economic concerns, relating primarily to the risks facing human rights and fundamental freedoms. For instance, AI poses risks to the right to personal data protection and privacy, and equally so a risk of discrimination when algorithms are used for purposes such as to profile people or to resolve situations in criminal justice. There are also some concerns about the impact of AI technologies and robotics on the labour market (e.g. jobs being destroyed by automation). Furthermore, there are calls to assess the impact of algorithms and automated decision-making systems (ADMS) in the context of defective products (safety and liability), digital currency (blockchain), disinformation-spreading (fake news) and the potential military application of algorithms (autonomous weapons systems and cybersecurity). Finally, the question of how to develop ethical principles in algorithms and AI design".

Now, as to Ethics guidelines for trustworthy AI, trustworthy AI should be:

(1) lawful - respecting all applicable laws and regulations

(2) ethical - respecting ethical principles and values

(3) robust - both from a technical perspective while taking into account its social environment

Again, all the mess starts from AI's partial, human-centric definition:

“Artificial intelligence (AI) refers to systems that display intelligent behaviour by analysing their environment and taking actions – with some degree of autonomy – to achieve specific goals. AI-based systems can be purely software-based, acting in the virtual world (e.g. voice assistants, image analysis software, search engines, speech and face recognition systems) or AI can be embedded in hardware devices (e.g. advanced robots, autonomous cars, drones or Internet of Things applications).”

Or take the updated definition of AI: “Artificial intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behaviour by analysing how the environment is affected by their previous actions. As a scientific discipline, AI includes several approaches and techniques, such as machine learning (of which deep learning and reinforcement learning are specific examples), machine reasoning (which includes planning, scheduling, knowledge representation and reasoning, search, and optimization), and robotics (which includes control, perception, sensors and actuators, as well as the integration of all other techniques into cyber-physical systems).”

Why creating ML/AI based on human brains is not only immoral but will also never be successful can be seen from the limitations and challenges of deep learning NNs, as indicated below:

DL neural nets are good at classification and clustering of data, but they are not great at other decision-making or learning scenarios such as deduction and reasoning.

The biggest limitation of deep learning models is they learn through observations. This means they only know what was in the data on which they trained. If a user has a small amount of data or it comes from one specific source that is not necessarily representative of the broader functional area, the models will not learn in a way that is generalizable.

The hardware requirements for deep learning models can also create limitations. Multicore high-performing graphics processing units (GPUs) and other similar processing units are required to ensure improved efficiency and decreased time consumption. However, these units are expensive and use large amounts of energy. Other hardware requirements include random access memory and a hard disk drive (HDD) or RAM-based solid-state drive (SSD).

Other limitations and challenges include the following:

Deep learning requires large amounts of data. Furthermore, the more powerful and accurate models will need more parameters, which, in turn, require more data.

Once trained, deep learning models become inflexible and cannot handle multitasking. They can deliver efficient and accurate solutions but only to one specific problem. Even solving a similar problem would require retraining the system.

There are many kinds of neural networks that form a sort of "zoo" with lots of different species and creatures for various specialized tasks. There are neural networks such as FFNNs, RNNs, CNNs, Boltzmann machines, belief networks, Hopfield networks, deep residual networks and other various types that can learn different kinds of tasks with different levels of performance.

Another major downside is that neural networks are a "black box" -- it's not possible to examine how a particular input leads to an output in any sort of explainable or transparent way. For applications that require root-cause analysis or a cause-effect explanation chain, this makes neural networks not a viable solution. For these situations where major decisions must be supported by explanations, "black box" technology is not always appropriate or allowed.

Any application that requires reasoning (such as programming or applying the scientific method), long-term planning, or algorithm-like data manipulation is completely beyond what current deep learning techniques can do, even with large data.

It might be a combination of supervised, unsupervised and reinforcement learning that pushes the next breakthrough in AI forward.

Deep learning examples. Because deep learning models process information in ways similar to the human brain, they can be applied to many tasks people do. Deep learning is currently used in most common image recognition tools, natural language processing (NLP) and speech recognition software. These tools are starting to appear in applications as diverse as self-driving cars and language translation services.

Future of Artificial Intelligence


Explainable AI can help humans understand and explain machine learning (ML) algorithms, deep learning and neural networks.
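
One common explainability technique can be sketched in a few lines: permutation importance, which shuffles one feature at a time and measures how much model performance drops. The data and model here are toy assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))
y = (X[:, 0] > 0).astype(int)   # only feature 0 actually drives the label

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

print(result.importances_mean)  # feature 0 should dominate the other two
```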

The two fields of machine learning and graphical causality arose and developed separately. A central problem for AI and causality is causal representation learning, the discovery of high-level causal variables from low-level observations.
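
The gap between observed association and the effect of an intervention (Pearl's do-operator) can be simulated directly; the structural equations below are assumptions invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)                  # hidden confounder
x = z + rng.normal(size=n)              # x is driven by z
y = 2 * z + rng.normal(size=n)          # y depends on z, NOT on x

# Observational view: x and y look strongly related.
print(np.corrcoef(x, y)[0, 1])          # ~0.63 despite no causal x -> y link

# Interventional view, do(x): set x independently, cutting the z -> x edge.
x_do = rng.normal(size=n)
y_do = 2 * z + rng.normal(size=n)       # y unchanged by the intervention
print(np.corrcoef(x_do, y_do)[0, 1])    # ~0.0: the "effect" vanishes
```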

Causal AI offers significant benefits to industries beyond banking, finance and the stock exchange. By improving staff and bed allocation, for example, and predicting the spread of diseases in real time, it could save healthcare providers up to 15 percent in operational costs. The oil and gas industry could enjoy savings of at least $200bn by optimising transportation and storage and more accurately predicting supply and demand. And more than $500bn in food waste could be saved each year if we could better predict microclimates and demand.


Azamat Abdoullaev

Tech Expert

Azamat Abdoullaev is a leading ontologist and theoretical physicist who introduced a universal world model as a standard ontology/semantics for human beings and computing machines. He holds a Ph.D. in mathematics and theoretical physics. 

   