What The Heck Is Superintelligence?

What The Heck Is Superintelligence?

John Nosta 26/01/2024
What The Heck Is Superintelligence?

The development of superintelligence is a topic of much debate among scientists.

OpenAI has underscored the tremendous potential and associated risks of superintelligence, an evolution from the more traditional concept of Artificial General Intelligence (AGI). The company posits that this immensely capable technology could emerge within the current decade and could either solve significant global issues or lead to humanity’s disempowerment or extinction. OpenAI’s strategy involves creating an automated alignment researcher with human-level capabilities and utilizing vast computing resources to train and align superintelligence iteratively. This process, called superintelligence alignment, requires innovation in AI alignment techniques, extensive validation, and adversarial stress testing. OpenAI is dedicating significant resources and research towards addressing this challenge and encourages outstanding researchers and engineers to join their effort. However, it remains to be seen if the shift in terminology from AGI to superintelligence will have a profound impact on the ongoing debates around AI’s risks and benefits.

OpenAI has highlighted the potential of superintelligence, which could be the most impactful technology ever invented, capable of solving significant global problems. However, it also acknowledges the immense risks associated with superintelligence, including the disempowerment or even extinction of humanity.

Although superintelligence may seem distant, OpenAI believes it could emerge within this decade. Managing these risks will necessitate new governance institutions and addressing the challenge of aligning superintelligence with human intent. It’s interesting to note that OpenAI is using this term rather than the more conventional Artificial General Intelligence (AGI). Here’s their reasoning:

Here we focus on superintelligence rather than AGI to stress a much higher capability level. We have a lot of uncertainty over the speed of development of the technology over the next few years, so we choose to aim for the more difficult target to align a much more capable system. Current AI alignment techniques, such as reinforcement learning from human feedback, are insufficient for controlling a potentially superintelligent AI. Humans cannot reliably supervise systems that are much smarter than us, and existing techniques will not scale to superintelligence. OpenAI emphasizes the need for scientific and technical breakthroughs to overcome these challenges.

OpenAI’s approach involves building an automated alignment researcher with roughly human-level capabilities. Vast computing resources will be utilized to scale their efforts and iteratively align superintelligence. The key steps include developing a scalable training method, validating the resulting model, and stress testing the alignment pipeline. Drawing from the title of OpenAI announcement, this concept is being called superintelligence alignment.

To address the difficulty of evaluating tasks that are challenging for humans, AI systems can be employed for scalable oversight. Generalization of oversight to unsupervised tasks and the detection of problematic behavior and internals are crucial for validating alignment. Adversarial testing, including training misaligned models, will help confirm the effectiveness of alignment techniques.

OpenAI anticipates that its research priorities will evolve as more is learned about the problem, and they plan to share their roadmap in the future. They have assembled a team of top machine learning researchers and engineers dedicated to solving the problem of superintelligence alignment. OpenAI is committing 20% of their secured compute over the next four years to this effort.

While success is not guaranteed, OpenAI remains optimistic that a focused and concerted effort can solve this problem. They aim to provide evidence and arguments that convince the machine learning and safety community that the problem has been solved, and they are actively engaging with interdisciplinary experts to consider broader human and societal concerns.

OpenAI encourages outstanding researchers and engineers, even those not previously working on alignment, to join their efforts. They consider superintelligence alignment to be one of the most important unsolved technical problems and believe it is a tractable machine learning problem with the potential for significant contributions.

There seems to be a new fissure developing in the heated debate on AI, AGI and the complex interrelated issues from utility to human destruction. Now, the lexicon has changed a bit, but it’s yet to be seen if this is science or semantics.

Share this article

Leave your comments

Post comment as a guest

0
terms and condition.
  • No comments found

Share this article

John Nosta

Digital Health Expert

John is the #1 global influencer in digital health and generally regarded as one of the top global strategic and creative thinkers in this important and expanding area. He is also one the most popular speakers around the globe presenting his vibrant and insightful perspective on the future of health innovation. His focus is on guiding companies, NGOs, and governments through the dynamics of exponential change in the health / tech marketplaces. He is also a member of the Google Health Advisory Board, pens HEALTH CRITICAL for Forbes--a top global blog on health & technology and THE DIGITAL SELF for Psychology Today—a leading blog focused on the digital transformation of humanity. He is also on the faculty of Exponential Medicine. John has an established reputation as a vocal advocate for strategic thinking and creativity. He has built his career on the “science of advertising,” a process where strategy and creativity work together for superior marketing. He has also been recognized for his ability to translate difficult medical and scientific concepts into material that can be more easily communicated to consumers, clinicians and scientists. Additionally, John has distinguished himself as a scientific thinker. Earlier in his career, John was a research associate at Harvard Medical School and has co-authored several papers with global thought-leaders in the field of cardiovascular physiology with a focus on acute myocardial infarction, ventricular arrhythmias and sudden cardiac death.

   
Save
Cookies user prefences
We use cookies to ensure you to get the best experience on our website. If you decline the use of cookies, this website may not function as expected.
Accept all
Decline all
Read more
Analytics
Tools used to analyze the data to measure the effectiveness of a website and to understand how it works.
Google Analytics
Accept
Decline