14 Robots Teach Themselves to Walk Into a Bar

Can robots be trained to pick up objects they've never seen before? Recent research indicates that the best approach may be to allow robots to experiment on their own, and then share that knowledge with other robots. 

How To Train Your Robot

Until now, robots have been programmed to recognize specific objects and to handle those objects in a pre-programmed fashion. This is a good approach for predictable tasks, in planned locations, with objects the robot has encountered before. In the real world, life is what happens while you’re making other plans.

At Google, Sergey Levine and his team took fourteen robotic arms, networked them together, and used convolutional neural networks (CNNs) to let these robots learn, on their own, how to pick up small objects including a cup, a tape dispenser, and a toy dolphin.

Over time, and nearly one million practice runs, the robots were able to self-correct and optimize their actions. They learned to pick up objects faster and more reliably, employing self-taught strategies such as pushing one object aside in order to reach another, and developing different techniques for picking up soft versus hard objects.
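
In practice, that kind of self-correction boils down to learning to predict whether a given gripper motion, taken from a given camera view, will end in a successful grasp. The sketch below is not Google's code; it is a minimal PyTorch-style illustration of that idea, and names like GraspSuccessNet are invented for the example.

```python
# Minimal sketch (not the Google implementation): a CNN that looks at a camera
# image plus a candidate gripper motion and predicts the probability that the
# resulting grasp will succeed, trained on logged trial-and-error attempts.
import torch
import torch.nn as nn

class GraspSuccessNet(nn.Module):  # hypothetical name, for illustration only
    def __init__(self):
        super().__init__()
        self.vision = nn.Sequential(            # small CNN over the tray image
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(              # fuse image features with the motion
            nn.Linear(32 + 4, 64), nn.ReLU(),
            nn.Linear(64, 1),                   # logit for "this grasp will succeed"
        )

    def forward(self, image, motion):
        # image: (B, 3, H, W) camera frame; motion: (B, 4) candidate gripper move
        return self.head(torch.cat([self.vision(image), motion], dim=1))

# One training step on a batch of logged attempts (success = 1, failure = 0).
net = GraspSuccessNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(8, 3, 64, 64)            # stand-ins for real camera frames
motions = torch.randn(8, 4)                   # stand-ins for recorded gripper commands
labels = torch.randint(0, 2, (8, 1)).float()  # did the grasp succeed?

opt.zero_grad()
loss = loss_fn(net(images, motions), labels)
loss.backward()
opt.step()
```

In the actual paper, a servoing loop then repeatedly picks whichever motor command such a predictor scores highest, which is what gives the robots their continuous hand-eye coordination.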

The robots learned from practice and sharing knowledge with one another, without human interaction or pre-programming. Once the experiment was under way, the only human contribution was to replace the objects in the trays.

During the study, the robots were positioned in front of trays of randomly placed objects varying in shape, scale, weight, and hardness. Each of the 14 robot arms had cameras, grasping mechanisms, and sensors in a similar, but not identical, design to increase the diversity of knowledge.

Each robot’s successes and failures were fed into a shared convolutional neural network, a “robo-wiki” of pooled knowledge accessible to all 14 robots.
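
The “robo-wiki” is easiest to picture as a single experience pool that every arm writes to and a single model that every arm reads from. The toy sketch below shows only that data flow; the class and field names are invented here and do not come from the paper.

```python
# Toy sketch of the shared-experience idea: all 14 arms log their attempts to
# one pool, and the shared network is periodically retrained on the pooled
# data, so a trick discovered by one arm improves every other arm's next grasp.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Attempt:
    robot_id: int              # which of the 14 arms made the attempt
    image: bytes               # camera frame captured before the grasp
    motion: Tuple[float, ...]  # the gripper command that was tried
    success: bool              # did the object end up in the gripper?

@dataclass
class SharedExperiencePool:
    attempts: List[Attempt] = field(default_factory=list)

    def log(self, attempt: Attempt) -> None:
        # Any arm can write here; all of them read from it.
        self.attempts.append(attempt)

    def training_batch(self, size: int) -> List[Attempt]:
        # Fed to the shared CNN during its periodic retraining.
        return self.attempts[-size:]

pool = SharedExperiencePool()
pool.log(Attempt(robot_id=3, image=b"...", motion=(0.1, -0.2, 0.05, 1.0), success=True))
print(len(pool.training_batch(64)))  # -> 1
```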

As the researchers put it: “Our experimental evaluation demonstrates that our method achieves effective real-time control, can successfully grasp novel objects, and corrects mistakes by continuous robotic hand-eye coordination...”

Google’s project builds on the concept that robots can teach other robots how to perform specific tasks—or, more accurately, how one artificial intelligence can help another AI improve, without a human ever intervening. 

Here's a link to the Google Research paper: Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection, by Sergey Levine, Peter Pastor, Alex Krizhevsky, and Deirdre Quillen.

The robots taught each other to pick up hard and soft objects in different ways, without human guidance. 

For an object perceived as rigid, the robots would simply grasp the outer edges and squeeze to hold it tightly. But for a soft object, like a sponge or the dolphin, the robots "learned" that it was easier to place one gripping "finger" in the middle and one around the edge, and then squeeze.

The good news is that the robots still needed humans to place objects in the trays.

A Brown University project called "RoboBrain" plans to store all machine-learned behaviors in a knowledge database that other bots can tap into. The project has been nicknamed "the world's first knowledge engine for robots." Some people call it "Skynet."

RoboBrain is a large-scale computational system that learns from publicly available Internet resources, computer simulations, and real-life robot trials.

It accumulates everything it learns about robotics into a comprehensive and interconnected knowledge base. Applications include prototyping for robotics research, household robots, and self-driving cars.
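
In spirit, the "knowledge engine" pattern is simple: learned facts about objects and actions go into one store that any robot can query before it acts. RoboBrain's real system is a large, multi-modal knowledge graph; the toy lookup below shows only the shape of the idea, and every entry in it is invented for illustration.

```python
# Toy illustration of a robot knowledge base: facts learned by any robot are
# stored once and queried by all of them. Entries here echo the grasping
# strategies described above and are made up for this example.
knowledge = {
    "sponge": {"rigidity": "soft", "grasp": "one finger in the middle, one around the edge"},
    "cup": {"rigidity": "rigid", "grasp": "grip the outer edges and squeeze"},
}

def how_do_i_grasp(obj: str) -> str:
    # A robot consults the shared store before acting on a (possibly new) object.
    entry = knowledge.get(obj)
    if entry is None:
        return "no prior experience: try it, record the outcome, share it"
    return entry["grasp"]

print(how_do_i_grasp("sponge"))       # one finger in the middle, one around the edge
print(how_do_i_grasp("toy dolphin"))  # no prior experience: try it, record the outcome, share it
```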

The Claw Is Our Master: He chooses who will go and who will stay.

Regarding robots walking into a bar... you would think that 14 robots using convolutional neural networks would be great at telling jokes. They're not.

The best robot-written humor I've found is derivative of human jokes. For example, the CMU bot, dubbed Data, began a recent robot standup session with:

“A doctor says to his patient, ‘I have bad news and worse news. The bad news is that you only have 24 hours to live.’ The patient wonders how the news could possibly get worse. ‘I’ve been robo-dialing you since yesterday...’”

More on robots, AI and humor, here: It's hard work being funny--especially for robots.

Click this link for more information regarding deep learning and convolutional neural networks.

(c) David J. Katz, 2018 - New York City

---------------------

David J. Katz is chief marketing officer at Randa Accessories, an industry-leading multinational consumer products company, and the world's largest men's accessories business. 

His specialty is collaborating with retailers, brands and suppliers to innovate successful outcomes in evolving markets. 

David was selected by LinkedIn as a "Top Voice in 2017." He has been named a leading fashion industry "Change Agent" by Women's Wear Daily and a "Menswear Mover" by MR Magazine.

He is a public speaker and co-author of the best-selling book "Design for Response: Creative Direct Marketing That Works" [Rockport Publishers]. He has been featured in The New York Times, The Wall Street Journal, New York Magazine, The Huffington Post, MR Magazine, and WWD.

David is a graduate of Tufts University and the Harvard Business School. He is a student of neuroscience, consumer behavior and "stimulus and response." The name Pavlov rings a bell.

