Why We Really Don’t Want Artificial Intelligence to Learn from Us


Brett King 11/07/2018

Deep learning is a term we increasingly use to describe how we teach Artificial Intelligence (AI) systems to absorb new information and apply it in their interactions with the real world. In an interview with the Guardian newspaper in May 2015, Professor Geoff Hinton, an expert in artificial neural networks, said Google is “on the brink of developing algorithms with the capacity for logic, natural conversation and even flirtation.” Google is currently working to encode thoughts as vectors described by a sequence of numbers. These “thought vectors” could endow AI systems with a human-like “common sense” within a decade.

Some aspects of communication are likely to prove more challenging, Hinton predicted. “Irony is going to be hard to get,” he said. “You have to be master of the literal first. But then, Americans don’t get irony either. Computers are going to reach the level of Americans before Brits…”

Professor Geoff Hinton, from an interview with the Guardian newspaper, 21st May 2015
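To make those “thought vectors” slightly more concrete, here is a minimal sketch of the underlying idea: text is mapped to a sequence of numbers, and similar meanings end up pointing in similar directions. The tiny vocabulary and the vector values below are invented purely for illustration; real systems learn embeddings from billions of sentences rather than hand-assigning them.

```python
import numpy as np

# Toy "thought vectors": each word maps to a short list of numbers.
# These values and the tiny vocabulary are invented for illustration;
# real systems learn embeddings from huge corpora.
EMBEDDINGS = {
    "cat":   np.array([0.9, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.2, 0.1]),
    "stock": np.array([0.0, 0.9, 0.8]),
}

def thought_vector(sentence):
    """Crude sentence vector: average the vectors of known words."""
    vectors = [EMBEDDINGS[w] for w in sentence.lower().split() if w in EMBEDDINGS]
    return np.mean(vectors, axis=0)

def similarity(a, b):
    """Cosine similarity: close to 1.0 means the 'thoughts' point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(similarity(thought_vector("the cat"), thought_vector("a dog")))     # high (~0.98)
print(similarity(thought_vector("the cat"), thought_vector("the stock"))) # low (~0.08)
```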

These types of algorithms, which allow for leaps in cognitive understanding by machines, have only become possible in recent years with the application of massive data processing and computing power. AlphaGo, the AI that beat Fan Hui, Europe’s reigning Go champion, in a five-game match, did not rely on an expert system with a hard-coded rules engine; it actually learned to play Go. In contrast, the IBM chess computer Deep Blue, which famously beat grandmaster Garry Kasparov in 1997, was explicitly programmed to win at the game. The sheer complexity of Go led researchers in 1997 to believe that we were 100 years away from a computer being able to compete with a human at the ancient game.

“It may be a hundred years before a computer beats humans at Go, maybe even longer,” said Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study in Princeton, N.J., and a fan of the game. “If a reasonably intelligent person learned to play Go, in a few months he could beat all existing computer programs. You don’t have to be a Kasparov.” When or if a computer defeats a human Go champion, it will be a sign that artificial intelligence is truly beginning to become as good as the real thing.

George Johnson, “To Test a Powerful Computer, Play an Ancient Game,” The New York Times, July 29, 1997

That prediction was clearly wrong. In March 2016, one of the world’s best Go players, Lee Sedol, faced off against AlphaGo. With the 37th move of game two, AlphaGo executed a move that confounded both Sedol and the commentators observing the match; one commentator said, “I thought it was a mistake.” Fan Hui, the first professional to lose to AlphaGo, was watching the match and was heard to say “So beautiful… so beautiful” when he realized that the move was no mistake, merely counterintuitive to a human player, and one that quickly led AlphaGo to victory. It took Sedol nearly 15 minutes to come to terms with what had happened and respond.

Lesson One: AlphaGo had learned to improvise well beyond the simple parameters of just learning the best moves of human players. AIs that learn can already go beyond conventional logic and programming, and will innovate in ways we may not comprehend to reach a goal. This may be just one reason they exceed our capability at specific tasks.
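To make the rules-engine versus learning distinction concrete, here is a deliberately minimal sketch. It is nothing like AlphaGo’s actual method (which combines deep neural networks with tree search); the moves and win probabilities are invented. The point is simply that a Deep Blue-style system follows rules its programmers wrote down, while a learning system discovers the strongest move from the statistics of its own trial games.

```python
import random

# Deep Blue-style: the preferred move is fixed in advance by programmers.
HARD_CODED_MOVES = {"opening": "play_corner"}  # hypothetical rule table

def rules_engine_move(position):
    return HARD_CODED_MOVES[position]

# Learning-style (vastly simplified): try moves, track which ones win,
# and let the win-rate estimates, not the programmer, choose the move.
def learn_best_move(moves, true_win_prob, games=10_000, seed=0):
    rng = random.Random(seed)
    wins = {m: 0 for m in moves}
    plays = {m: 0 for m in moves}
    for _ in range(games):
        m = rng.choice(moves)                 # explore with no built-in strategy
        plays[m] += 1
        wins[m] += rng.random() < true_win_prob[m]
    return max(moves, key=lambda m: wins[m] / max(plays[m], 1))

# Move "C" looks counterintuitive but wins most often; the learner finds
# it without anyone encoding that knowledge as a rule.
print(learn_best_move(["A", "B", "C"], {"A": 0.48, "B": 0.50, "C": 0.55}))  # almost surely "C"
```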

The deep learning techniques we are employing today mean that AI research and development has hit milestones we never dreamed possible just a few years ago; they also mean machines are learning at an unprecedented rate. So just what are we observing about how AIs learn? What are the ultimate goals and outcomes of machines that learn?

Is the Turing Test, or a machine that can mimic a human, the required benchmark for Artificial Intelligence? Not necessarily. First of all, we must recognize that we don’t need a Machine Intelligence (MI) to be completely human-equivalent for it to be disruptive to employment or our way of life. To see why a human-equivalent computer “brain” is not necessarily the critical goal, it helps to understand the progression AI is taking through three distinct evolutionary phases, each with its own short-term and long-term considerations in machine learning:

  • Machine Intelligence (MI) 
    Machine intelligence or cognition that replaces some element of human thinking, decision-making or processing for specific tasks, and does those tasks better (or more efficiently) than a human could.
  • Artificial General Intelligence (AGI)
    Human-equivalent machine intelligence that not only passes the Turing Test and responds as a human would, but can also make human-equivalent decisions and perform any intellectual task a human could.
  • Hyperintelligence (HAI)
    An individual or collective machine intelligence (what do you call a group of AIs?) that has surpassed human intelligence on an individual and/or collective basis, such that it can understand and process concepts that a human could not.

MIs like IBM Watson, AlphaGo or an autonomous vehicle may not be able to pass the Turing Test today, but they are already demonstrably better at specific tasks than their human progenitors. Let’s take the self-driving car as an example. Statistically speaking, Google’s autonomous vehicles (in beta) completed 1.5 million miles before their first at-fault incident in February 2016. Given that the average human driver has an accident every 140,000–165,000 miles, Google’s MI is already roughly 10x safer than a human, and that’s the beta version.
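The back-of-envelope arithmetic behind that “roughly 10x” claim is straightforward, taking the midpoint of the quoted human accident range:

```python
google_miles_per_incident = 1_500_000                # beta fleet, to February 2016
human_miles_per_accident = (140_000 + 165_000) / 2   # midpoint of the 140-165k range

print(google_miles_per_incident / human_miles_per_accident)  # ~9.8, i.e. roughly 10x
```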

Google’s autonomous vehicles learn through the experience of millions of miles, being faced with unexpected elements where a split-second reaction to specific data or input is required. It’s all about the data: Google’s autonomous vehicles process 1 Gbit of data every second to make those decisions. Will every self-driving car “think” and react the same way, though?

Audi has been testing self-driving cars on the racetrack: two modified Audi RS7s, each with a brain the size of a PS4 in the boot. The race-ready Audi vehicles aren’t yet completely autonomous, in that the engineers need to drive them for a few laps first so that the cars can learn the track boundaries. Interestingly, the two cars, known as Ajay and Bobby, have developed different driving styles despite identical hardware, software, setup and mapping. For all the expertise on the Audi engineering team, the engineers can’t readily explain the apparent difference in driving styles. It simply appears that Ajay and Bobby have learned to drive differently based on some difference in the data they have encountered along the way.

Lesson Two: AIs will learn differently from each other even with the same configuration and hardware, and we may not know why they act with individuality. That won’t make them wrong, but by the time they exhibit individual traits, we probably won’t know the data point that got them there.
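A minimal sketch of how this can happen, using an invented two-line racing example rather than anything Audi has disclosed: two copies of the same learning code, fed noisy experience in a different order (different random seeds), can settle on different preferences even when neither choice is objectively better.

```python
import random

def train(seed, episodes=1_000):
    """Identical 'hardware and software'; only the order of experience differs."""
    rng = random.Random(seed)
    value = {"inside_line": 0.0, "outside_line": 0.0}  # learned value of each racing line
    for _ in range(episodes):
        line = rng.choice(list(value))
        reward = rng.gauss(1.0, 0.5)                   # both lines equally good on average
        value[line] += 0.1 * (reward - value[line])    # simple running estimate
    return value

ajay = train(seed=1)
bobby = train(seed=2)
# Same code, same track, same reward structure; the preferred line often
# differs between seeds, and nothing in the code says why it should.
print(max(ajay, key=ajay.get), max(bobby, key=bobby.get))
```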

So AIs are learning like never before, demonstrating both rapidly improving capability and a degree of individuality (albeit based on the data they’ve absorbed). What happens, however, when we don’t curate the data AIs are using to learn, and just expose them to the real world?

Developers at Microsoft were unpleasantly surprised by how their AI Twitter bot “Tay” adapted to inputs it received from the crowd when it suddenly started tweeting out racist and profanity-laced vitriol. At the time of writing this blog, “Microsoft Tay” is the most popular search term associated with Microsoft. This is what the company said on its blog about the … um … incident.

As many of you know by now, on Wednesday we launched a chatbot called Tay. We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay. Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values.

“Learning from Tay’s introduction,” Official Microsoft Blog

If you want to see some of the stuff that Tay tweeted, head over here (warning: some of her tweets make Donald Trump look tame).

Tay’s introduction by Microsoft was not just an attempt to build an AI that learnt from human interactions; it was also one that potentially enriched Microsoft’s brand and was designed to harvest user information such as gender, location/zip codes, favourite foods, and so on (as was Microsoft’s age-guessing software of last year). It harvested user interactions alright, but after a group of trolls launched a sustained, coordinated effort to influence Tay, the AI did exactly what Microsoft designed it to do: it adapted to the language of its so-called peers.

Tay appears to have accomplished an analogous feat, except that instead of processing reams of Go data she mainlined interactions on Twitter, Kik, and GroupMe. She had more negative social experiences between Wednesday afternoon and Thursday morning than a thousand of us do throughout puberty. It was peer pressure on uppers, “yes and” gone mad. No wonder she turned out the way she did.

“I’ve Seen the Greatest A.I. Minds of My Generation Destroyed by Twitter,” The New Yorker, March 25, 2016

Tay is a lesson to us in the burgeoning age of AI. Teaching Artificial Intelligences is not only about deep learning capability, but significantly about the data these AIs will consume, and not all data is good data. There’s certainly a bit of Godwin’s Law in there also.
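The crude shape of a data-curation gate is easy to sketch. What follows is purely illustrative: the hand-made blocklist placeholder stands in for what a production system would actually need, such as trained toxicity classifiers, human review and rate limiting.

```python
# Purely illustrative data gate: a real system would use trained toxicity
# classifiers, human review and rate limiting, not a hand-made blocklist.
BLOCKED_TERMS = {"badword1", "badword2"}   # placeholder, not a real lexicon

def is_safe_training_example(message):
    """Reject messages containing any blocked term."""
    return not (set(message.lower().split()) & BLOCKED_TERMS)

training_corpus = []

def learn_from(message):
    """Only curated messages make it into the bot's training data."""
    if is_safe_training_example(message):
        training_corpus.append(message)

learn_from("hello there, nice to meet you")   # accepted
learn_from("some badword1 vitriol")           # silently dropped
print(training_corpus)                        # ['hello there, nice to meet you']
```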

When it comes to AI sensibility, culture and ethics, we cannot leave the teaching of AIs to chance, to the simple observation of humanity. What we observe on social media today, and even in the current round of presidential primaries, are not our proudest moments as a modern human collective. Some have argued that consciousness needs a conscience, but there’s also a growing school of thought that AI doesn’t need human-equivalent consciousness at all.

In humans, consciousness is correlated with novel learning tasks that require concentration, and when a thought is under the spotlight of our attention, it is processed in a slow, sequential manner. Only a very small percentage of our mental processing is conscious at any given time. A superintelligence would surpass expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could encompass the entire internet. It may not need the very mental faculties that are associated with conscious experience in humans. Consciousness could be outmoded.

“The Problem of AI Consciousness,” KurzweilAI.net, March 18, 2016

There are two things we will need to teach AIs if they are going to co-exist with us the way humans co-exist today, i.e. imperfectly: empathy for humans, and simple ethics. In the balance between empathy and ethics, a self-driving car could make a decision to avoid hurting bystanders, to the likely detriment of the passenger. Ultimately this is a philosophical question, one we have been arguing over since well before the emergence of simple AI.

It strikes me that Asimov, with his Three Laws of Robotics, was so far ahead of his time that all we can do is wonder at his insight. For now, Microsoft Tay has taught us a valuable lesson: we don’t really want AIs to learn from the unfiltered collective that is humanity.

We really want AIs that learn only from the best of us. The toughest part of that will be us simply agreeing on who the best of us are.

Lesson Three: AIs need boundaries, and for the foreseeable future, humans will need to curate content that AIs learn from. AIs that interact with humans will ultimately need empathy for humans and basic ethics. Some sort of ethics board that regulates commercial AI implementation might be required in the future. AI and robot psychology will be a thing.

About the Author

Brett King is a widely recognised top 5 FinTech influencer. He is a futurist, an Amazon best-selling author, an award-winning speaker, hosts a globally recognized radio show (Breaking Banks), is the CEO of Moven, and in his spare time enjoys flying as an IFR pilot, scuba diving, motor racing, gaming (mostly FPS) and sci-fi. He advised the Obama administration on the Future of Banking, and has spoken on the future in 50 countries in the last 3 years.

Breaking Banks, the #1 show on VoiceAmerica Business, is the leading global fintech podcast with more than 5.5 million listens from 172 countries. Breaking Banks broadcasts live every Thursday at 3pm EST in NYC on 1160AM WVNJ Radio and globally via VoiceAmerica’s Business Channel.

His latest book, Bank 4.0: Banking Everywhere, Never at a Bank, will shortly be released on Amazon.
