What if it were enough to show a robot a single demonstration to teach it what to do? Artificial intelligence researchers at Nvidia have developed a system that does exactly that: the robot learns how to perform a specific task after watching it just once.
Recent developments have already suggested that robots may one day be taught through verbal instructions and correct themselves by reading our minds when they make mistakes, and that robots can share this knowledge with each other through the cloud. The researchers at Nvidia stress the importance of communicating tasks to robots properly:
"It must be easy for him to tell him the task so that the robots can perform useful tasks in real-world conditions."
For this experiment, the researchers trained a series of neural networks to recognize four differently colored blocks and a car. Running on an Nvidia Titan X GPU, the networks watched through a camera as a human arranged the blocks in a particular order, and then tried to imitate those movements with a robotic gripper.
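As a rough illustration of the perception side, the sketch below shows a small image classifier of the kind that could distinguish the five object types mentioned above. The PyTorch framework, the architecture, and the input size are illustrative assumptions, not a description of Nvidia's actual networks.

```python
# A minimal sketch (not Nvidia's model) of a perception network that
# classifies camera crops into five object types: four colored blocks
# plus the car. All layer sizes here are arbitrary assumptions.
import torch
import torch.nn as nn

class ObjectClassifier(nn.Module):
    def __init__(self, num_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                     # 32x32 -> 16x16
        )
        self.head = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: a batch of 64x64 RGB crops taken from the camera image
        h = self.features(x)
        return self.head(h.flatten(start_dim=1))

model = ObjectClassifier()
dummy_crop = torch.randn(1, 3, 64, 64)   # stand-in for one camera crop
logits = model(dummy_crop)               # scores for the five object classes
```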
None of this is as simple as it sounds. First, the object-recognition network works out what is in front of the camera. The system then 'thinks' about what needs to be done, combining the outputs of the other neural network components into a program of steps to reproduce what it saw. Finally, the gripper carries out that program. The robot also monitors its own mistakes: if it fails at any stage and realizes it cannot reach its goal, it tries again.
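To make that perceive-plan-execute-retry flow concrete, here is a minimal, self-contained sketch. The toy stacking "world", the function names, and the simulated failure rate are all illustrative assumptions, not Nvidia's implementation.

```python
# A hedged sketch of the loop described above: observe the scene, plan the
# remaining steps, execute them, and retry if the goal state is not reached.
import random

DEMONSTRATED_ORDER = ["red", "green", "blue", "yellow"]  # what the human showed

def perceive(world):
    """Stand-in for the object-recognition network: report the current stack."""
    return list(world["stack"])

def generate_plan(current_stack, goal=DEMONSTRATED_ORDER):
    """Stand-in for the planning step: list the blocks still to be placed."""
    return [block for block in goal if block not in current_stack]

def execute_step(world, block):
    """Stand-in for the gripper controller; fails occasionally."""
    if random.random() < 0.2:        # simulated grasp failure
        return False
    world["stack"].append(block)
    return True

def imitate_demonstration(world, max_attempts=5):
    for _ in range(max_attempts):
        plan = generate_plan(perceive(world))
        for block in plan:
            if not execute_step(world, block):
                break                # a step failed: stop and re-plan
        if perceive(world) == DEMONSTRATED_ORDER:
            return True              # the goal matches the demonstration
    return False

world = {"stack": []}
print("Success:", imitate_demonstration(world))
```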
The researchers say this learning technique should significantly speed up the process of teaching robots new tasks.