Read the paper on this research: [ Link ]
Robots can be programmed to perform all sorts of repetitive tasks, but they don't adapt well to changing environments and circumstances. They rely on people to direct them and to orient them to a precise set of parameters that will not change. What if a person could simply tell a robot what is needed, and the robot could understand that language and act on it, without extensive reprogramming?
That's the very problem that researchers are working on in the Robotics and Artificial Intelligence Laboratory at the University of Rochester. Thomas Howard, an assistant professor of electrical and computer engineering, and PhD student Jake Arkin have developed a model for processing natural language so that a robot can be given basic verbal commands and act on them without additional programming. The research was a joint effort with Rohan Paul and Nicholas Roy of MIT.
The model also builds a spatial representation of the environment in which the robot operates, so that it can distinguish between objects based on their placement and interact with them accordingly. If a table holds a row of identical objects, telling the robot to pick up the third one from the left is enough for it to determine which object is meant and pick it up.
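To make that idea concrete, here is a minimal sketch of how a command such as "pick up the third one from the left" might be resolved against a set of detected objects. The data structure, sorting rule, and function names are illustrative assumptions, not the authors' model, which learns such groundings from language rather than hard-coding them.

```python
# Hypothetical sketch: grounding "the third one from the left".
# The detection format and ordinal logic are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    label: str
    x: float  # horizontal position in the robot's workspace frame (meters)
    y: float

def resolve_ordinal_from_left(objects, ordinal):
    """Return the `ordinal`-th object (1-indexed) when sorted left to right."""
    ordered = sorted(objects, key=lambda obj: obj.x)
    if not 1 <= ordinal <= len(ordered):
        raise ValueError(f"no object #{ordinal} among {len(ordered)} detections")
    return ordered[ordinal - 1]

# Example: five identical blocks on a table, as reported by perception.
blocks = [DetectedObject("block", x, 0.4) for x in (0.10, 0.25, 0.40, 0.55, 0.70)]
target = resolve_ordinal_from_left(blocks, 3)
print(target)  # DetectedObject(label='block', x=0.4, y=0.4)
```

The point of the sketch is that once perception supplies object positions, a spatial phrase like "third from the left" reduces to an ordering over one coordinate; the hard part the researchers tackle is mapping free-form language onto such groundings automatically.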
Cameras aid the accuracy of the robot's movements and its understanding of the space around it. Localized visual servoing, contributed by graduate student Siddharth Patki, allows the robot to execute the demonstrated actions consistently. As the model is refined, the robot will be able to adapt to increasingly complex environments and verbal commands, and to do so even more rapidly.
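As a rough illustration of the visual servoing idea in general (not Patki's implementation, which is not detailed here), the sketch below shows a proportional control loop that drives an image-space error toward zero. The gain, the feature representation, and the simulated update are all illustrative assumptions.

```python
# Minimal image-based visual servoing sketch: command motion proportional
# to the error between a current and desired image feature.

import numpy as np

def visual_servo_step(feature, target_feature, gain=0.5):
    """One proportional step: return a velocity that shrinks the error.
    Real systems map image error to motion through an image Jacobian;
    this sketch uses a plain proportional law for clarity."""
    error = target_feature - feature
    velocity = gain * error
    return velocity, error

# Example: servo a detected grasp point toward the image center (pixels).
feature = np.array([412.0, 180.0])
target = np.array([320.0, 240.0])
for _ in range(20):
    velocity, error = visual_servo_step(feature, target)
    feature = feature + velocity  # stand-in for the camera observing the motion
print(np.round(error, 2))  # error has shrunk toward [0. 0.]
```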