Sooner than you think, we may
have robots to tidy up our homes. Researchers in Cornell's Personal Robotics
Lab have trained a robot to survey a room, identify all the objects, figure out
where they belong and put them away. Previous work has dealt with placing single
objects on a flat surface. Now researchers are looking at a group of objects,
and this is the first work that places objects in non-trivial places. The new
algorithms allow the robot to consider the nature of an object in deciding what
to do with it. The researchers tested placing dishes, books, clothing and toys
on tables and in bookshelves, dish racks, refrigerators and closets. The robot
was up to 98 percent successful in identifying and placing objects it had seen
before. It was able to place objects it had never seen before, but success
rates fell to an average of 80 percent. Ambiguously shaped objects, such as
clothing and shoes, were most often misidentified. The robot begins by
surveying the room with a Microsoft Kinect 3D camera, originally made for video
gaming but now widely used by robotics researchers. Many images are
stitched together to create an overall view of the room, which the robot's
computer divides into blocks based on discontinuities of color and shape. The
robot has been shown several examples of each kind of object and learns what
characteristics they have in common.
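The matching idea described above — learn what examples of each object have in common, then pick the most probable match for each block of the scan — can be sketched as a nearest-centroid classifier. The feature names, objects, and numbers here are illustrative stand-ins, not the actual 3D shape and color features the researchers use.

```python
# Sketch: average the features of labeled examples per object type
# (what each kind of object "has in common"), then classify each
# segmented block as the object whose average is closest.
import math

# Training examples: (object label, feature vector) pairs.
# Features here (height, width, roundness) are hypothetical.
EXAMPLES = [
    ("plate", (0.02, 0.25, 0.90)),
    ("plate", (0.03, 0.27, 0.85)),
    ("book",  (0.04, 0.20, 0.10)),
    ("book",  (0.05, 0.22, 0.15)),
    ("shoe",  (0.10, 0.28, 0.40)),
]

def centroids(examples):
    """Average the feature vectors per label."""
    sums, counts = {}, {}
    for label, feats in examples:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, (0.0,) * len(feats))
        sums[label] = tuple(s + f for s, f in zip(prev, feats))
    return {label: tuple(s / counts[label] for s in sums[label])
            for label in sums}

def classify(block_features, model):
    """Score every known object against the block and return the
    most likely match (smallest distance in feature space)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(model, key=lambda label: dist(block_features, model[label]))

model = centroids(EXAMPLES)
print(classify((0.035, 0.26, 0.88), model))  # prints "plate"
```

A block the robot has never seen still gets assigned to the nearest known object, which is why unfamiliar or ambiguously shaped items (clothing, shoes) are the ones most often misidentified.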
For each block it computes the
probability of a match with each object in its database and chooses the most
likely match. For each object the robot then examines the target area to decide
on an appropriate and stable placement. Again it divides a 3D image of the
target space into small chunks and computes a set of features for each chunk,
taking into account the shape of the object it's placing. The researchers train
the robot for this task by feeding it graphic simulations in which placement
sites are labeled as good or bad, and it builds a model of what good placement
sites have in common. It chooses the chunk of space with the closest fit to
that model. Finally the robot creates a graphic simulation of how to move the
object to its final location and carries out those movements. These are
practical applications of computer graphics far removed from gaming and animating
movie monsters. A robot with a success rate of less than 100 percent would still
break an occasional dish. Performance could be improved, the researchers say,
with cameras that provide higher-resolution images, and by pre-programming the
robot with 3D models of the objects it is going to handle, rather than leaving
it to create its own model from what it sees. The robot sees only part of a
real object, so a bowl could look the same as a globe. Tactile feedback from the
robot's hand would also help it to know when the object is in a stable position
and can be released.
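The placement step described earlier — divide the target space into chunks, compute features for each, and choose the chunk that best fits a model of good placement sites learned from labeled simulations — might be sketched as follows. The features (flatness, support, clearance) and the learned weights are assumptions for illustration, not the paper's actual representation.

```python
# Sketch: score every chunk of the target space against a model of
# what good placement sites have in common, and pick the best fit.

# Weights standing in for a model trained on simulations whose
# placement sites were labeled good or bad (assumed values).
GOOD_SITE_WEIGHTS = {"flatness": 0.5, "support": 0.3, "clearance": 0.2}

def score(chunk):
    """Higher score = closer fit to the model of a good placement site."""
    return sum(GOOD_SITE_WEIGHTS[k] * chunk[k] for k in GOOD_SITE_WEIGHTS)

def best_chunk(chunks):
    """Choose the chunk of target space that best fits the model."""
    return max(chunks, key=score)

# Candidate chunks of a bookshelf; features are computed per chunk,
# taking into account the shape of the object being placed.
chunks = [
    {"id": "edge",   "flatness": 0.9, "support": 0.2, "clearance": 0.8},
    {"id": "middle", "flatness": 0.9, "support": 0.9, "clearance": 0.7},
    {"id": "slope",  "flatness": 0.3, "support": 0.6, "clearance": 0.9},
]
print(best_chunk(chunks)["id"])  # prints "middle"
```

The "support" feature is where the stability concern shows up: a chunk at the edge of a shelf can be perfectly flat yet still score poorly, because little of the object would actually rest on it.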