A new study from researchers at Google has made serious progress toward robots that learn to navigate the world without any human assistance, reports Technology Review.
A self-learning robot from virtual environments
This new study builds on research conducted a year earlier, when the group first worked out how to make a robot learn in the real world. Reinforcement learning is normally carried out in simulation — a virtual clone of the robot flails helplessly around a virtual copy of its environment until the algorithm has improved enough to perform well. The trained program is then loaded onto the physical robot and switched on.
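The train-in-simulation, deploy-to-hardware loop can be sketched in miniature. The toy environment, states, and rewards below are invented for illustration — the actual study uses far richer physics — but the shape is the same: the policy is learned entirely in the simulated world, then run frozen, as if imported into the robot.

```python
import random

random.seed(0)  # deterministic run for this sketch

class SimWorld:
    """Invented 1-D corridor: the agent starts at 0 and must reach position 4."""
    def __init__(self):
        self.pos = 0
    def step(self, action):              # action: +1 (forward) or -1 (backward)
        self.pos = max(0, min(4, self.pos + action))
        done = self.pos == 4
        reward = 1.0 if done else -0.1   # small cost per step encourages speed
        return self.pos, reward, done

def train_in_sim(episodes=500, eps=0.2, alpha=0.5, gamma=0.9):
    """Tabular Q-learning, run only against the simulated environment."""
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        env, s, done = SimWorld(), 0, False
        while not done:
            a = random.choice((-1, 1)) if random.random() < eps \
                else max((-1, 1), key=lambda a: q[(s, a)])
            s2, r, done = env.step(a)
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
    return q

def deploy(q):
    """Run the frozen policy greedily, as if loaded onto the physical robot."""
    env, s, done, steps = SimWorld(), 0, False, 0
    while not done and steps < 20:
        s, _, done = env.step(max((-1, 1), key=lambda a: q[(s, a)]))
        steps += 1
    return steps

q_table = train_in_sim()
print(deploy(q_table))  # the learned policy walks straight to the goal
```

The catch the article describes follows directly from this split: the deployed policy is only as good as the simulator it trained in, so hard-to-model surfaces break the transfer.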
Obviously, this method helps the robot avoid damage, since it no longer needs trial-and-error runs in the real world, where the consequences of failure are high. However, it requires an environment that's easy to model — surfaces like scattered gravel or the springs of a mattress, which shift under a robot's metal feet, take so long to simulate that it's rarely worth the effort.
This is why the researchers sought to sidestep the modeling problem by training the robot in the real world from the start. To do this, they designed a more efficient algorithm capable of learning from fewer trials and fewer errors, getting the robot walking within two hours. And since the physical environment naturally varies, the robot can also adapt quickly to relatively similar settings, such as steps, mild inclines, and flat ground with obstacles.
The reality principle as algorithm
However, the robot still needed a human babysitter to step in hundreds of times, Jie Tan, a coauthor of the paper and leader of the robotics locomotion team at Google Brain, told Technology Review. "Initially I didn't think about that," he said.
That became a new problem. The first step toward solving it was to bound the terrain the robot could explore, and to have it train on multiple maneuvers at once. When the robot reached the edge of the bounded area while learning to walk forward, it simply reversed direction and learned to walk backward instead.
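The boundary trick above turns the edge of the workspace into a task switch rather than a stopping point. A minimal sketch, with the bounds, positions, and step size invented for illustration:

```python
# Hedged sketch of bounded-workspace training: at either edge of the safe
# area, the robot flips its current learning task (forward vs. backward
# walking) instead of needing a human to reset it. Numbers are invented.

BOUND_LOW, BOUND_HIGH = 0.0, 5.0   # edges of the training area (metres)

def training_direction(position, current_direction):
    """Reverse the learning task at a boundary instead of stopping."""
    if position >= BOUND_HIGH and current_direction == +1:
        return -1   # reached the far edge: now practise walking backward
    if position <= BOUND_LOW and current_direction == -1:
        return +1   # reached the near edge: practise walking forward again
    return current_direction

# Simulated stroll: the robot drifts one way, bounces at each edge, and so
# collects forward- and backward-walking experience without human resets.
pos, direction, log = 2.0, +1, []
for _ in range(12):
    pos += 0.8 * direction
    direction = training_direction(pos, direction)
    log.append(direction)
print(log)
```

The pay-off is that every traversal of the area produces useful training data for one of the two gaits, so no experience — and no human reset — is wasted.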
Next, the researchers constrained the movements available to the robot during its trials, making it cautious enough to minimize damage when it did fall. Of course, the robot fell anyway, so they added another algorithm so it could stand back up.
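Those two safeguards — cautious action limits plus a scripted recovery — can be sketched as a simple control step. The joint ranges, tilt threshold, and routine below are all invented for illustration, not taken from the paper:

```python
# Assumption-laden sketch: commands are clamped to a conservative range,
# and a scripted stand-up routine takes over whenever a tilt reading says
# the robot has fallen. All thresholds and values are hypothetical.

SAFE_RANGE = (-0.3, 0.3)   # rad: cautious limits on joint targets
FALL_TILT = 1.0            # rad: body pitch beyond this counts as fallen

def clamp_action(joint_targets):
    """Constrain each joint command to the conservative safe range."""
    lo, hi = SAFE_RANGE
    return [min(hi, max(lo, t)) for t in joint_targets]

def stand_up_routine():
    """Placeholder for the separate recovery controller."""
    return "recovering"

def control_step(joint_targets, body_pitch):
    if abs(body_pitch) > FALL_TILT:      # fallen: hand control to recovery
        return stand_up_routine()
    return clamp_action(joint_targets)   # upright: act, but cautiously

print(control_step([0.5, -0.7, 0.1], body_pitch=0.2))  # → clamped targets
print(control_step([0.5, -0.7, 0.1], body_pitch=1.4))  # → "recovering"
```

Separating the learned controller from a fixed recovery behavior means a bad fall interrupts training rather than ending it.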
As the tweaks and adjustments accumulated, the robot became capable of walking on its own across disparate surfaces, including flat ground, a doormat with crevices, and a memory-foam mattress. This work holds promise for future applications in which robots must move through rough, unforgiving terrain without any humans around to help.
"I think this work is quite exciting," said Chelsea Finn, an assistant professor at Stanford affiliated with Google but not part of the research, to Technology Review. "Removing the person from the process is really hard. By allowing robots to learn more autonomously, robots are closer to being able to learn in the real world that we live in, rather than in a lab."
But, she warns, there's a catch: the current setup relies on a motion-capture system above the robot to track its location — something that won't be available in real-world scenarios.
In the future, researchers plan on adapting their new algorithm to different robots, or even multiple robots learning at the same time, in the same environment. Tan thinks the trick to unlocking more useful robots lies in cracking locomotion.
"A lot of places are built for humans, and we all have legs," he said to Technology Review. "If a robot cannot use legs, they cannot navigate the human world."
From military applications to assisting humans the way a service dog does, the future of robots makes robotics one of the most enticing engineering careers for the foreseeable future.