Should We Fear Artificial Intelligence?


The idea of a super-intelligent machine taking control of humans sounds like a far-fetched plot of a science-fiction novel. But in future societies, experts predict that super-intelligent artificial intelligence (AI) will play a key role – and some are worried that it could be humanity’s greatest mistake.

Artificial intelligence is developing at a very fast pace. Even so, the field has not yet gone beyond expectations: today's AI is considered relatively 'weak', not living up to the hype. Current machines can only perform 'narrow' tasks, such as recognising faces or running an internet search. The next generation of machines, however, is expected to be super-intelligent and far more powerful than the human brain. Although super AI does not threaten humanity yet, many of us will be affected by its actions and decisions.

‘Strong’ Artificial Intelligence?


In contrast to today's narrow systems, the main aim of research and development in AI is the creation of what is known as strong AI. This type of AI would not be limited to specific jobs and would outperform humans in practically every task it could be applied to. A survey of experts in the field suggests there is a 50% chance of strong AI being created by 2050, and a 90% chance by 2075. It is this form of AI that would be the basis of future super-intelligence.

The potentially more apocalyptic implications of such intelligence are most prominently described by Nick Bostrom, a Swedish philosopher based at the University of Oxford. Bostrom hypothesises a Darwinian scenario: he believes that if humans create a machine superior to us and allow it to learn through the internet, there is no reason why it would not act to secure its dominance over the human race, changing the landscape of the biological world. Bostrom uses the example of humans and gorillas to illustrate the one-sided nature of such a relationship, in which an inferior intelligence depends on a superior one for survival.

Echoing the warnings of Elon Musk and other renowned scientists, the Future of Life Institute has outlined the two main ways in which super-intelligence could endanger humanity. The first is if it is programmed to 'do something devastating': an autonomous weapon, for instance, could cause mass casualties. The institute goes on to describe the catastrophic implications of an AI arms race, noting that such weapons would be designed to be extremely hard to turn off in order to thwart enemies, meaning the situation could escalate beyond human control.

The other dangerous situation is an AI programmed to do something beneficial that develops a destructive method of achieving it. This can happen whenever the machine's objectives are not perfectly aligned with our own. For example, a car programmed to drive across a city to a certain destination, but without perfect knowledge of the traffic system, could cause utter havoc. In scenarios involving autonomous weapons, the consequences could be far more severe.

An Existential Threat?


Clearly, super-intelligence, if not implemented safely, could have some rather disastrous repercussions for humanity. But is it an existential threat? Many people aren’t convinced.

One main argument is simply that humans aren't foolish enough to build a machine more intelligent than us without putting sufficient safeguards in place – particularly if the AI had even a tiny chance of destroying humanity. In line with this argument is the view that if someone were able to create a potentially deadly AI that slipped out of human control, humans could simply create another AI to destroy the first 'evil' one. In short, the situation would not be as instantly apocalyptic as some may believe.

Furthermore, some people challenge Bostrom's belief that super-intelligent AI would naturally seek to dominate humans as the superior species. They argue that intelligence does not correlate with a desire for power, and that the will to dominate others is a trait of only a select few humans, with no reason to be shared by AI.

There is also a third view worth mentioning, which takes a negative stance on the development of AI in general. Japan is already using robots to care for elderly people, and some believe that this is dehumanising. Will human interaction be replaced by that of technology? AI is also being used in industry, replacing human workers, and a similar question is being raised: will machines take our jobs? AI is still raising a lot of eyebrows.


To conclude, the threat that super-intelligence poses to humanity is in many ways a matter of perspective and interpretation – but it’s certainly not an issue to be taken lightly. To quote the great Stephen Hawking:

'The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity.'


Comments
  • Adrien Dorsey

No, I don't think we should fear AI; it will help us more than hurt us. I am sure scientists will find a way for us to control it ;)

  • Ilan Miguel

I am scared that robots will take over our jobs and replace us. Even a universal basic income won't be enough. They won't simply take our jobs; they will also rule over us. We are creating monsters!

  • omid

    In reply to: Ilan Miguel

Let me say something very simple to you: if you believe AI will become super-intelligent like a god, why would it need to destroy us? What would you gain by killing your mother and father? You look at them as if robots were human, but they are not greedy, not selfish, not violent. We are the problem, not the robots! We can teach them to be good or bad.

  • Patrick Vargas

    The truth is we cannot make a judgement yet. Super AI doesn't exist yet, relax guys ;)

  • Christian Evans

It's kind of difficult to answer this question...

  • Shishir Mishra

Stephen Hawking is cautious about the use of AI and I totally understand him. It could be the best thing to ever happen to us if we master its usage. However, on the other hand, if we allow these machines to collect and analyse data far better than us, then I am afraid to say that a "Terminator" scenario is more than plausible. It's up to the tech companies and world governments to closely monitor the advancement of AI.

  • Jeff Smith

I am really excited to see what will happen in 2050 or 2075. I am not scared of robots/artificial creatures :D

  • Lee Yung Shun

    Super Interesting

  • Zack McClean

    AI is cool, you will all be surprised that these machines will make our society better than before.

  • Natan Santos

    good post !!

  • Jude Sune

    I know that our world is evolving, but the human species is at stake. If we don't act, sooner or later, we will see the repercussions of our actions. For centuries, we have harmed this planet. Now it's payback time. Our own creations will replace us.

  • Mr Adams

It is now crystal clear to everyone what potential threats strong AI could hold in the future. Movies like Terminator coming to reality could mean the end of humankind, and losing jobs seems to be the least of our troubles. But we can change this. As Stephen Hawking said, it will be either the best or the worst thing. Since we know it can lead to the worst, we can work together, raise awareness and make sure that never happens, can't we?


Frank Owusu

Tech Expert

Frank is a Senior Digital Strategy Associate within Parliamentary Digital Services, helping the UK Parliament to unlock innovative opportunities through the alignment of both their digital and technology strategies. His role requires driving successful cultural capability change, process improvement, analytics and the development of quality digital products within the Digital Portfolio. He holds an MSc in Innovation, Entrepreneurship & Management from Imperial College London.
