14 March 2009

Robot Responds to Human Gestures

Imagine a day when you can turn to your own personal robot, give it a task and then sit down and relax, confident that the robot is doing exactly what you wanted. So far, that autonomous, do-it-all robot is the stuff of science fiction or cartoons like ‘The Jetsons’. But a Brown University-led robotics team has made an important advance: the group has demonstrated how a robot can follow nonverbal commands from a person in a variety of environments, indoors as well as outside, without requiring adjustments for lighting. They have also created a novel system in which the robot follows its user at a precise distance.

A video showing the robot responding to gestures and verbal commands can be found in the Brown University release. The team successfully instructed the robot to follow a student, to turn around (a 180-degree pivot) and to freeze when the student disappeared from view, essentially idling until the instructor reappeared and gave a nonverbal or verbal command.

The Brown team started with a PackBot, a mechanized platform developed by iRobot that has been used widely by the U.S. military for bomb disposal, among other tasks. The researchers outfitted the robot with a commercial depth-imaging camera and a laptop running novel computer programs that enable the machine to recognize human gestures, decipher them and respond to them.
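The behaviors described in the demo (follow, turn around, freeze when the commander leaves view) can be pictured as a small state machine. The sketch below is illustrative only, not the Brown team's actual software; all names (`Gesture`, `RobotState`, `GestureController`) are hypothetical.

```python
from enum import Enum, auto

class Gesture(Enum):
    FOLLOW = auto()
    TURN_AROUND = auto()   # the 180-degree pivot from the demo
    HALT = auto()

class RobotState(Enum):
    IDLE = auto()
    FOLLOWING = auto()
    TURNING = auto()

class GestureController:
    """Illustrative mapping of recognized gestures to robot behaviors,
    idling whenever the commander disappears from view, as in the demo."""

    def __init__(self):
        self.state = RobotState.IDLE

    def update(self, gesture, person_visible):
        # Freeze (idle) the moment the commander is no longer seen.
        if not person_visible:
            self.state = RobotState.IDLE
        elif gesture == Gesture.FOLLOW:
            self.state = RobotState.FOLLOWING
        elif gesture == Gesture.TURN_AROUND:
            self.state = RobotState.TURNING
        elif gesture == Gesture.HALT:
            self.state = RobotState.IDLE
        return self.state
```

In this framing, the robot resumes activity only when the instructor reappears and a new command (nonverbal or verbal) is recognized.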

The researchers made two key advances with their robot. The first involved what scientists call visual recognition. Applied to robots, it means helping them orient themselves with respect to the objects in a room. Robots can see things, but recognition remains a challenge. The team overcame this obstacle with a computer program whereby the robot recognized a human by extracting a silhouette, as if the person were a virtual cut-out. This allowed the robot to home in on the human and receive commands without being distracted by other objects in the space.

The second advance involved the depth-imaging camera. The team used a CSEM Swiss Ranger, which uses infrared light to detect objects and to establish the distance between the camera and the target object and, just as important, between the camera and any other objects in the area. The distinction is key, because it enabled the Brown robot to stay locked on the human commander, which was essential to maintaining a set distance while following the person. The result is a robot that doesn't require remote control or constant vigilance, a key step toward developing autonomous devices. The team hopes to add more nonverbal and verbal commands for the robot and to increase the three-foot working distance between commander and robot.
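One simple way to get the "virtual cut-out" effect from a depth camera is to keep only the pixels near the tracked person's depth, so everything else in the room falls away. The article does not describe the team's actual algorithm; this is a minimal sketch under that assumption, with the function name, the `band` parameter and the sample values all invented for illustration.

```python
import numpy as np

def extract_silhouette(depth_m, person_depth, band=0.4):
    """Return a boolean mask of pixels within `band` metres of the
    tracked person's depth -- a crude depth-based silhouette cut-out."""
    valid = depth_m > 0  # zero here stands in for "no infrared return"
    return valid & (np.abs(depth_m - person_depth) < band)

# Toy 2x2 depth image (metres): person at ~3 m, wall at 5 m, one dropout.
depth = np.array([[2.9, 3.1],
                  [5.0, 0.0]])
mask = extract_silhouette(depth, person_depth=3.0)
# mask marks only the two near-3 m pixels as part of the person
```

Because the mask depends on depth rather than color or brightness, this kind of segmentation is largely insensitive to lighting, which fits the article's point about working indoors and outdoors without lighting adjustments.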
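Maintaining a set following distance from a depth measurement can be sketched as a simple proportional controller: drive forward when the commander is farther than the target, back up when too close. The gain, speed limit and 0.9 m target (roughly the three-foot working distance mentioned above) are assumed values for illustration, not parameters from the Brown system.

```python
def follow_speed(measured_m, target_m=0.9, gain=0.8, max_speed=0.5):
    """Proportional distance-keeping controller (illustrative).

    measured_m -- depth-camera distance to the commander, in metres
    Returns a forward speed in m/s; negative means reverse.
    """
    error = measured_m - target_m          # positive: commander too far
    speed = gain * error                   # proportional response
    return max(-max_speed, min(max_speed, speed))  # clamp to safe range

# At the target distance the robot holds still; far away it drives
# forward at the capped speed; too close it backs off.
```

In practice such a loop would run once per camera frame, using the distance to the silhouette identified in the previous step, which is why distinguishing the commander's depth from that of other objects matters.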

More information:

http://www.sciencedaily.com/releases/2009/03/090311085058.htm