A National Institute of Standards and Technology (NIST) physicist proposes an optoelectronic model for artificial general intelligence (AGI) inspired by the brain.
In a new paper published in Applied Physics Letters, American physicist Jeffrey M. Shainline, Ph.D., of the National Institute of Standards and Technology (NIST) explores an optoelectronic architecture, inspired by neuroscience and the human brain, as a potential pathway toward artificial general intelligence (AGI).
Artificial intelligence (AI) is accelerating rapidly worldwide after its period of dormancy, driven mainly by brain-inspired deep learning, increased computational power, the availability of big-data training sets, the parallel processing power of GPUs, and cloud-based computing, among other factors.
AI machine learning technology is being incorporated across industries for purposes such as brain-computer interfaces, mental health, astrophysics, movie ratings, extreme weather forecasting, synthetic biology, genomics, drug discovery, antimicrobials, life sciences, neuroscience, financial services, hospital patient care, biotechnology, cancer diagnostics, hearing aids, battery testing for electric vehicles, and more.
In 2020, worldwide investment in AI rose 40 percent from the year prior to USD 67.9 billion, according to The AI Index 2021 Annual Report by Stanford University’s Human-Centered Artificial Intelligence Institute (HAI), authored by Erik Brynjolfsson, Daniel Zhang, Saurabh Mishra, and others.
Yet despite the global investment in artificial intelligence, machine learning today consists largely of point solutions that are fragile, narrow in scope, and resource-intensive. Currently, artificial general intelligence is not on the immediate horizon of possibilities. Or is it?
“To guide the design of hardware for AGI, we must consider insights from neuroscience regarding how neural systems integrate information across space and time to accomplish cognition,” wrote Shainline.
Drawing upon neuroscience and human cognition, in his scientific paper, Shainline posits that “artificial neural hardware should be designed and constructed to leverage photonic communication while performing synaptic, dendritic, and neuronal functions with electronic devices.”
“General intelligence involves the integration of many sources of information into a coherent, adaptive model of the world,” wrote Shainline. “To design and construct hardware for general intelligence, we must consider principles of both neuroscience and very large-scale integration. For large neural systems capable of general intelligence, the attributes of photonics for communication and electronics for computation are complementary and interdependent.”
Shainline’s approach departs from contemporary efforts to pair light with silicon-based semiconductor electronics. Instead, he advocates integrating photonics with superconducting, rather than semiconducting, electronic components. Specifically, Shainline proposes combining single-photon detectors, silicon light sources, and superconducting electronic circuits in a low-temperature environment as a construct for economical scalability and efficiency.
In physics, superconductivity refers to the quantum phenomenon in which certain materials, cooled below a critical temperature, lose all electrical resistance and expel magnetic flux fields, conducting electricity perfectly.
Dutch physicist Heike Kamerlingh Onnes discovered in 1911 that mercury’s electrical resistance vanished at extremely low temperatures, a few degrees above absolute zero. He was awarded the Nobel Prize in Physics in 1913, one of five Nobel Prizes associated with superconductivity research. The other Nobel Prizes in Physics recognizing superconductivity research came in 1972 (John Bardeen, Leon Neil Cooper, and John Robert Schrieffer), 1973 (Leo Esaki, Ivar Giaever, and Brian David Josephson), 1987 (J. Georg Bednorz and K. Alexander Müller), and 2003 (Alexei A. Abrikosov, Vitaly L. Ginzburg, and Anthony J. Leggett).
Shainline uses the biological brain as an analogy. “On the microscale, synapses, dendrites, and neurons are specialized processors comprising the gray matter computational infrastructure of the brain,” wrote Shainline. “On the meso-scale, cortical minicolumns of 100 neurons act as specialized processors, and on the macro-scale, brain regions play that role.”
In the research, Shainline designed a synapse that “detects a single near-infrared photon and requires no power to retain the synaptic state, a feature enabled by the dissipationless nature of superconductors.”
The schematics consist of a superconducting optoelectronic synapse comprising a single-photon detector (SPD), a Josephson junction, and a flux-storage loop called a synaptic integration (SI) loop. The synaptic bias current can dynamically adjust the synaptic weight. The neuron cell body sums and thresholds the signals from its synapses; when the threshold is reached, the transmitter circuit pulses light, communicating photons to downstream synapses. Additionally, the neuronal threshold current can vary the neuronal threshold dynamically.
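The behavior described above maps onto a familiar integrate-and-fire pattern: each photon detection adds a weighted contribution to a synapse's storage loop, and the neuron fires a light pulse once the summed contributions cross a threshold. The toy sketch below illustrates only that logical flow; the class names, weights, and threshold values are illustrative assumptions and are not taken from Shainline's paper or circuits.

```python
# Toy integrate-and-fire sketch of the optoelectronic neuron described
# above. All names and parameters are illustrative assumptions, not
# values from Shainline's paper.

class Synapse:
    """Single-photon detector feeding a flux-storage (SI) loop."""
    def __init__(self, weight):
        self.weight = weight      # set by the synaptic bias current
        self.stored_flux = 0.0    # retained without power (dissipationless)

    def detect_photon(self):
        # One near-infrared photon adds weighted flux to the SI loop.
        self.stored_flux += self.weight


class Neuron:
    """Sums synaptic flux and emits a light pulse at threshold."""
    def __init__(self, synapses, threshold):
        self.synapses = synapses
        self.threshold = threshold  # set by the neuronal threshold current
        self.spikes = 0

    def integrate_and_fire(self):
        total = sum(s.stored_flux for s in self.synapses)
        if total >= self.threshold:
            self.spikes += 1            # transmitter circuit pulses light
            for s in self.synapses:     # loops reset after firing
                s.stored_flux = 0.0
            return True
        return False


# Usage: two synapses receive photons until the neuron fires.
syn = [Synapse(weight=0.6), Synapse(weight=0.5)]
neuron = Neuron(syn, threshold=1.0)

syn[0].detect_photon()
fired_first = neuron.integrate_and_fire()   # 0.6 < 1.0: no spike
syn[1].detect_photon()
fired_second = neuron.integrate_and_fire()  # 1.1 >= 1.0: spike
print(fired_first, fired_second)            # prints: False True
```

The sketch captures one notable property from the paper: between photon arrivals, the stored flux persists with no power draw, which in software is trivial but in hardware is enabled by the dissipationless nature of superconductors.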
“It is the perspective of our group at NIST that hardware incorporating light for communication between electronic computational elements combined in an architecture of networked optoelectronic spiking neurons may provide the potential for AGI at the scale of the human brain,” Shainline wrote.
With the cross-disciplinary integration of physics, photonics, neuroscience, neuroanatomy, electrical engineering, and AI machine learning, there is now an alternate approach based on superconductivity to explore in the ongoing quest to achieve artificial general intelligence.
Copyright © 2021 Cami Rosso All rights reserved.
A version of this article first appeared on Psychology Today.