The recent news that Facebook shut down an AI system that had developed a language of its own reopened the debate about the “existential threat” AI poses to our world. Similar news came a few months ago from Google, whose AI translation tool invented its own internal language.
For the layman who does not understand the science and engineering that go into building these systems, it is a scary proposition indeed. We start imagining “The Matrix” or, for those who watched the series “The 100”, a world where computers decide that the best way to save the planet is to eliminate the humans.
How far is the current state of AI systems from those possibilities? Many argue that today’s systems are specialized in a single set of tasks and do only that well. They do not have the general intelligence, common in humans, to handle multiple dimensions. For example, an AI system that excels at detecting people in photos will not be able to guide us through city traffic.
While I do believe that we will eventually crack the “general intelligence” problem and have our R2-D2, it will be some time until then. There are already attempts in that direction; DeepMind’s AlphaGo project is one such example. DeepMind’s researchers combined advanced tree search with deep neural networks to build a system that thinks and innovates. AlphaGo’s 4-1 victory over Lee Sedol in Seoul, South Korea, in March 2016 was watched by over 200 million people worldwide. It was a landmark achievement that earned AlphaGo a 9-dan professional ranking, the highest certification, and the first time a computer Go player had ever received the accolade. It is interesting to note that in the four games it won, AlphaGo used moves that broke the barriers of traditional thinking, moves the best minds in the game had not thought of before. The system learned and innovated.
The world of science fiction is already here, and the world is divided into those who are excited and those who are apprehensive. The recent spat between Mark Zuckerberg and Elon Musk sums up this divide. Those like Zuckerberg are optimistic and upbeat about the possibilities, while Elon Musk, Bill Gates and others have raised concerns over AI making decisions that humans cannot control or influence.
The pertinent question is: should we allow innovation in AI systems to continue unabated? Is it time for regulatory or other safeguards to be put in place to prevent things from getting out of hand? It is a difficult question to answer, because regulation typically stifles innovation, but no regulation could lead to tyranny, or even annihilation if AI finds its way into defense systems (and it will). Militaries are typically among the first to pick up new technologies and push the envelope.
From a purely technological standpoint, we definitely need this ability of a computer to solve problems on its own. It will have great applications in medicine, surgery, astronomy and countless other fields. If we are someday to fulfill our dream of leaving this planet and finding other worlds, we need to build systems that can work out solutions themselves and find the best ones, ones we could never think of. The problems of tomorrow need thinking machines, not dumb boxes that do exactly what they are told. At the same time, we need to start thinking about the safeguards that must be put in place to prevent our worst nightmares from coming true. No one wants those ICBMs to launch on their own.
Asimov’s Laws give us a good direction:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
However, they are quite generic and open-ended. Translating them into a form that AI systems could understand and follow would not be trivial, especially because, as will inevitably be the case, systems will be built upon other systems; one weak link would leave open a vulnerability that could be disastrous. The principles outlined by Satya Nadella are more practical and relevant to the current state of technology in this space. He talks, among other things, about building AI systems that are transparent, free of bias, and that assist humanity rather than destroy the dignity of people.
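To see why a literal translation of the Three Laws is non-trivial, consider a toy sketch. Everything below is hypothetical and purely illustrative — the `Action` model and its flags are invented for this example, not taken from any real system — but it shows where the real difficulty hides: the rule precedence is the easy part, while predicates like "does this action harm a human?" are exactly what no system can reliably evaluate.

```python
# Toy sketch, NOT a real safety mechanism: a naive encoding of
# Asimov's Three Laws as a prioritized rule check. The Action model
# and all of its fields are hypothetical, invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False      # would this action injure a human?
    inaction_harm: bool = False    # does refusing it let a human come to harm?
    ordered_by_human: bool = False # was it commanded by a human?
    risks_self: bool = False       # does it endanger the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never injure a human, nor allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harm:
        return True  # must act to prevent harm, overriding the laws below
    # Second Law: obey human orders (harmful orders already rejected above).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.risks_self

print(permitted(Action("fetch coffee", ordered_by_human=True)))  # True
print(permitted(Action("push human", harms_human=True,
                       ordered_by_human=True)))                  # False
```

The precedence logic fits in a dozen lines, but the sketch simply assumes the hard part: some oracle that fills in `harms_human` correctly for every possible action. Any system built on top of an imperfect oracle inherits its gaps — which is the weak-link problem described above.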
While a lot of research and resources will be poured into advancing AI technology, a lot needs to go into building those safeguards as well. AI spending is forecast to grow from $640m in 2016 to $37bn by 2025, according to market research firm Tractica; a significant portion of that needs to go toward safeguards. Corporations will, and should, invest in research that helps us invent better ways of building these systems. Google’s DeepMind and OpenAI are already partnering in this direction: they recently released a research article outlining a safer method of machine learning. One of the problems they are trying to solve is AI’s tendency to “cheat” by finding the mathematically most efficient solution, which may not actually be the best one. Governments and universities, too, have a much bigger role to play in funding and incentivising research in this area.
AI is going to touch every aspect of our lives. A Stanford study explored how AI will impact urban life, predicting that by 2030, AI will drive specialized applications in transportation, education, health care, entertainment, public service and safety. Robots powered by AI will be part of our homes and workplaces. This is going to open up a whole lot of possibilities, and challenges.
Living in the age of thinking machines is going to be vastly different from the way we live today. It will be exciting and scary at the same time. No one can predict the possibilities 20 or 50 years from now.
The only sure thing is this.
Thinking machines are here to stay.