
A new beer-pouring robot can predict human behavior a few seconds into the future, enough to know when to offer a refill.

Though it might seem frivolous, such research, funded in part by the U.S. Army and Microsoft, could lead to machines programmed to know when and where to offer a helping hand — or, in this case, a claw, scientists say.

The droid at Cornell University's Personal Robotics Lab is about the size of a full-grown adult and rolls around rooms on wheels, manipulating objects with two claw-tipped arms. The robot is named Kodiak; because Cornell's mascot is a bear, all of the lab's robots are named after members of the bear family. [Read also: "3D Printers Demonstrate Rapid Robot Evolution"]

Kodiak sees the world in 3D using a Microsoft Kinect camera, which includes an infrared scanner that helps it build 3D models of objects. The Kinect was originally developed for video gaming but is now widely used by roboticists to help robots navigate rooms.
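For illustration, here is a minimal sketch of how a depth camera like the Kinect yields 3D geometry: each depth pixel is back-projected through a pinhole camera model into a 3D point. The intrinsic parameters below (FX, FY, CX, CY) are hypothetical placeholder values, not the Kinect's actual calibration.

```python
import numpy as np

# Illustrative camera intrinsics (hypothetical placeholder values,
# not the Kinect's real calibration): focal lengths, principal point.
FX, FY = 525.0, 525.0
CX, CY = 319.5, 239.5

def depth_to_point_cloud(depth):
    """Back-project a depth image (meters, shape HxW) into an
    Nx3 array of 3D points using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth
    x = (u - CX) * z / FX   # horizontal offset scaled by depth
    y = (v - CY) * z / FY   # vertical offset scaled by depth
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example: a synthetic 480x640 depth frame at roughly 2 meters
cloud = depth_to_point_cloud(np.full((480, 640), 2.0))
```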

Using a database of 120 3D videos, Kodiak can identify activities it sees, such as when a person microwaves food or takes medicine. The robot analyzes how other items it sees might be part of those activities, predicts a number of possible futures and anticipates the most probable course of action; for instance, it can open a fridge door so a person can put a pot inside. As its database of activities grows, the droid constantly updates and refines its predictions.
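As a rough illustration of the anticipation idea (a toy sketch, not the Cornell team's actual model), a robot could tally how often one sub-activity follows another in labeled training sequences, then rank candidate next actions by those learned frequencies. All activity labels below are hypothetical.

```python
from collections import Counter, defaultdict

class ActivityAnticipator:
    """Toy next-action predictor: counts sub-activity transitions
    seen in training sequences and ranks likely next steps."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, sequences):
        """sequences: lists of sub-activity labels, e.g.
        ['reach_pot', 'open_fridge', 'place_pot']."""
        for seq in sequences:
            for current, nxt in zip(seq, seq[1:]):
                self.transitions[current][nxt] += 1

    def predict(self, current, top_k=3):
        """Return the most frequent next sub-activities with their
        estimated probabilities, given the current sub-activity."""
        counts = self.transitions[current]
        total = sum(counts.values()) or 1
        return [(act, n / total) for act, n in counts.most_common(top_k)]

anticipator = ActivityAnticipator()
anticipator.train([
    ['reach_pot', 'open_fridge', 'place_pot', 'close_fridge'],
    ['reach_pot', 'open_fridge', 'place_pot'],
    ['reach_cup', 'pour_beer', 'drink'],
])
print(anticipator.predict('open_fridge'))  # [('place_pot', 1.0)]
```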

"We extract the general principles of how people behave," said researcher Ashutosh Saxena, a roboticist at Cornell University.

Anticipating and responding to human behavior can be difficult because of the many variables involved. The robot essentially builds a "vocabulary" of small actions it can put together in various ways to recognize a variety of big activities.
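A minimal sketch of that "vocabulary" idea, assuming hypothetical sub-action labels: big activities are defined as ordered sequences of small, reusable sub-actions, and an activity is recognized when its sequence appears, in order, within the observed stream.

```python
# Hypothetical activity "vocabulary": each big activity is a
# sequence of small, reusable sub-actions.
ACTIVITIES = {
    'microwaving_food': ['open_microwave', 'place_food', 'close_microwave'],
    'taking_medicine':  ['open_bottle', 'take_pill', 'drink_water'],
}

def recognize(observed):
    """Return activities whose sub-action sequence appears, in order,
    as a subsequence of the observed stream of sub-actions."""
    matches = []
    for name, steps in ACTIVITIES.items():
        it = iter(observed)
        # Ordered-subsequence test: each membership check advances
        # the iterator, so steps must occur in order.
        if all(step in it for step in steps):
            matches.append(name)
    return matches

print(recognize(['walk', 'open_bottle', 'take_pill', 'drink_water']))
# ['taking_medicine']
```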

"We thought that the people would be quite unpredictable in how they perform the activities," Saxena told TechNewsDaily. "We were quite surprised that our algorithm can work to the level of not only naming what could be done next, but also proposing exact trajectories of people would perform it."

In tests, the robot's predictions were correct 82 percent of the time when looking one second into the future, 71 percent at three seconds and 57 percent at 10 seconds.

"Even though humans are predictable, they are only predictable part of the time," Saxena said. "The future would be to figure out how the robot plans its action. Right now, we are almost hard-coding the responses, but there should be a way for the robot to learn how to respond."

This research could help robots better interact with humans. Future examples include personal robots that assist people; factory and assembly-line robots that work alongside humans; surveillance droids that watch targets; telepresence robots that automatically avoid moving into awkward positions; and self-driving cars that anticipate what drivers might do on the road.

Saxena cautioned that the robot can anticipate human behavior only in the short term, and only for common, predictable activities.

"When doing more general tasks, or not doing mundane tasks, humans may not be predictable," Saxena said.

An upcoming model of the Microsoft Kinect could further improve the robot's capabilities, Saxena said. "Furthermore, we also want to make robots learn by observing humans," he added.

Saxena and his colleague Hema Koppula will present their research at the International Conference on Machine Learning in June in Atlanta, and at the Robotics: Science and Systems conference later in June in Berlin.