At its annual developer conference last week, Google CEO Sundar Pichai gave a demo of Google Duplex: AI voice technology that, when used with Google Assistant, can make phone calls on our behalf to book appointments and reservations.
In the demo, the AI allowed Google Assistant to call a salon. The AI spoke to a human at the salon and booked a haircut appointment. What was particularly impressive was that the person at the salon could not tell she was speaking to an AI-powered robot rather than another human.
You can watch the demo here. It’s totally worth the time.
While the technological marvel of an AI/robot making such a call was mostly seen in a positive light, the ethics of it were not. The key ethical point raised by some was this: we (humans) should have the right to know whether we are speaking to another human or a robot. Google did later confirm that the AI/robot, once developed into a final product, will disclose its identity. But it got me thinking about the following.
How frequent is this use case, of humans talking to robots, actually likely to be?
To be clear, I am not saying that human-robot communication will not happen. It will. But if you are the person in a salon receiving a call from a self-identifying robot, would you have any interest in continuing that conversation? Probably not. You might continue if you have no choice, i.e., if the same technology is not yet available to you at the salon. But what if you do have the technology? In that case, you will most likely delegate the conversation to your own AI robot. And when that happens, the majority of such communication will be robot-to-robot.
What would such communication look like? My guess is that it will look nothing like the phone call in the video above. In fact, it won't even be a phone call, or any sort of verbal communication for that matter. Why?
Think about it. Why would robots need to use words in English, or any other language, to talk to each other? Robots are machines, and machines already know how to talk to each other. For decades, our laptops have talked to other laptops around the world over the Internet Protocol. Our phones talk to each other (e.g., AirDrop if you use an iPhone). Our home devices talk to other devices via Wi-Fi and IoT protocols. None of these interactions require the devices to use words in any language.
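To make the contrast concrete, here is a hypothetical sketch of what a machine-to-machine booking exchange might look like as structured data instead of speech. Every field name below is invented for illustration; this is not a real protocol, just the general shape such an exchange could take:

```python
import json

# A hypothetical appointment-booking request one machine might send
# another directly as structured data. All field names are invented
# for illustration, not taken from any real booking protocol.
request = {
    "type": "booking_request",
    "service": "haircut",
    "preferred_times": ["2018-05-15T12:00", "2018-05-15T13:00"],
    "party_size": 1,
}

# The salon's machine can parse the request and answer in milliseconds,
# with no speech synthesis or speech recognition in the loop.
response = {
    "type": "booking_confirmation",
    "service": request["service"],
    "confirmed_time": request["preferred_times"][0],
}

print(json.dumps(response))
```

The whole exchange fits in a few hundred bytes and two round trips, which is exactly how machines have negotiated with each other all along.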
So why would AI-enabled robots need to?
Besides, verbal communication is probably one of the slowest ways to communicate. How much information can you convey in a second of speech? Nothing meaningful, right? A word or two at most, I would guess. In that same second, a 100 Mbps connection can transfer more than 10 MB of information: the equivalent of a very large Word document, or many thousands of individual nuggets of information.
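A rough back-of-the-envelope calculation makes the gap concrete. The speech-rate and word-length figures below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope comparison: spoken words vs. a 100 Mbps link.
# Assumptions (illustrative only): ~2 words per second of speech,
# ~5 characters (bytes) per English word, link at full throughput.

SPEECH_WORDS_PER_SEC = 2
BYTES_PER_WORD = 5

speech_bytes_per_sec = SPEECH_WORDS_PER_SEC * BYTES_PER_WORD  # 10 bytes/s

link_bits_per_sec = 100_000_000       # 100 Mbps
link_bytes_per_sec = link_bits_per_sec // 8  # 12,500,000 bytes/s = 12.5 MB/s

ratio = link_bytes_per_sec // speech_bytes_per_sec
print(f"Speech:        ~{speech_bytes_per_sec} bytes/s")
print(f"100 Mbps link: ~{link_bytes_per_sec:,} bytes/s")
print(f"The link moves roughly {ratio:,}x more per second than speech")
```

Even under generous assumptions for human speech, the network link moves on the order of a million times more information per second.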
My sense is that when, and if, we reach the point where robots have to talk to each other to get things done for us, they will do so the way machines have always talked to each other. Which is not a regular phone call.
What do you think? Do you think we are headed to a world where robots will talk for us? What would that look like? And why?
Views are my own and not of my employer or other organizations I may be associated with.
Nitin is the CEO of Mobile Doorman, a former Management Consultant at McKinsey & Company, and a Board of Trustees Member at the Naperville Public Library. He advises company boards, CEOs and BU heads on product strategy, growth and operations. He also holds five US patents for the design and development of mobile, wireless and networking technology, including a patent to extend smartphone battery life. Nitin holds a master's in Electrical & Computer Engineering from the University of Texas at Austin and an MBA in Marketing & Finance from the University of Chicago Booth School of Business.