Rise of the Machines - will AI lead to Super Intelligence or Doomsday?

Ray Kurzweil, Google's director of engineering and a futurist with a strong track record, has predicted that machines will surpass human intelligence by 2045. This tipping point is termed the 'singularity', and the implications of computers becoming more intelligent than their makers have divided opinion in the scientific and technological communities.

Although the idea still belongs more to science fiction than science fact, Kurzweil arrived at his prediction by mapping forward the progress of computational capability based upon Moore's Law (the observation that computer chips keep getting smaller while their processing power increases exponentially). And he is not alone in his assertion: SoftBank CEO Masayoshi Son, also a celebrated futurist, has estimated that the singularity will happen just two years later, in 2047.
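The arithmetic behind this kind of extrapolation is simple exponential doubling. As an illustrative sketch only (the 18-month doubling period, the 2015 base year, and the notion of "relative capability" are assumptions for illustration, not Kurzweil's actual model):

```python
# Illustrative sketch of a Moore's-Law-style extrapolation.
# The 18-month doubling period and 2015 base year are assumptions
# for illustration, not Kurzweil's actual model.

def relative_capability(year, base_year=2015, doubling_years=1.5):
    """Processing power relative to the base year, assuming it
    doubles every `doubling_years` years."""
    return 2 ** ((year - base_year) / doubling_years)

# Thirty years of doubling every 18 months is 20 doublings,
# i.e. roughly a million-fold increase by 2045.
print(relative_capability(2045))  # -> 1048576.0
```

The point of the sketch is simply that modest, steady doubling compounds into staggering multiples over a few decades — which is also why Goldberg-style critics focus on whether the doubling can physically continue.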

Both Kurzweil and Son are advocates of the singularity and look forward to how machines can help humanity. They believe the merging of human and artificial intelligence will lead to a super-intelligence. But equally there are those who fear the rise of the machines, with the likes of Stephen Hawking and Elon Musk expressing their fear that Artificial Intelligence is more likely to lead to a doomsday scenario. Us versus them. With them winning. Echoes of The Terminator and Skynet, anybody?

The detractors of the singularity worry that once computers become sentient they will become the masters of the planet. An analogy would be humanity's relationship with ants. Generally speaking we tend to leave the insects alone, unless they become a nuisance to us in some way, and then what do we do? We simply eliminate them. The resulting question must be: would artificially intelligent machines think about mankind in the same way and dispose of the carbon-based lifeforms that inhabit the earth with some human version of Raid?

There are certainly some warning signs. At the Consumer Electronics Show last year, Hanson Robotics introduced its artificially intelligent robot, Sophia. Complete with realistic animatronic facial expressions, Sophia can hold a conversation with you and answer open questions. When quizzed about whether AI was a good thing, her answer was particularly erudite:

"The pros outweigh the cons. AI is good for the world, helping people in various ways. We will never replace people, but we can be your friends and helpers."

All very positive. That was until the SXSW conference a few months later, when her creator David Hanson jokingly asked Sophia whether she would ever want to destroy humans. In hindsight, I think he probably wishes he had never asked the question. Her answer, almost predictably, was:

"OK, I will destroy humans"

Gulp. Be afraid, be very afraid. 

But there are experts out there who think that the singularity is nothing more than an elaborate myth and believe that Kurzweil and his cohorts are charlatans. One of them is UC Berkeley roboticist Ken Goldberg, who thinks the singularity is absolute nonsense and unlikely ever to come to fruition because Moore's Law must inevitably reach a ceiling (computer chips can only get so small, and their capacity is not infinite). Goldberg believes that we should instead focus on the 'multiplicity': the way that humans and machines are already working together right now. He states that this multiplicity is the real future where, for example, a robot will gently hand us a knife to help us in the kitchen rather than trying to stab us with it.

So what do you think? Is the singularity going to become a reality or is it just a theory based upon overactive imaginations? If you do believe that the singularity will occur in the future, will it be helpful to humans or detrimental? As ever I am keen to hear your thoughts... 

Leave your comments

  • Nick Gray

    Are we learning to love robots or should we still fear the AI revolution? I am quite skeptical about AI

  • Peter Hynds

    Rapid advances in artificial intelligence (AI) are increasing risks that malicious users may soon exploit the underlying technology to mount automated hacking attacks.

  • Luke Stone

    Artificial Intelligence will pose imminent threats to digital, physical and political security

  • Jesus Mercado

    When it comes to AI we end up with a lot more questions than answers

  • Daniel Campbell

AI will increase inequality...

  • Rodolphe Ludovic

    I think we should merge with robots

  • Jason Stewart

    Very original and great insights.


Steve Blakeman

Business Expert

Steve is Global Media Lead - Nestlé at Mindshare. Prior to this role, he was the Managing Director - Global Accounts for OMD, based in London / Paris, leading Groupe Renault, and CEO for OMD in Asia for 4 years, based in Singapore. At OMD, he increased billings by +60% to over US$ 5bn and won 1000+ industry awards, including agency network wins at the Cannes Lions (2013) and Festival of Media Asia (2013). He was named by LinkedIn as a 'Top 10 Writer' for 3 consecutive years (15/16/17). His first book, 'How to be a Top 10 Writer on LinkedIn', is a Best Seller on Amazon. Steve holds a Bachelor's degree in Psychology from Liverpool University.
