Researchers from Brown University
and MIT have developed a method for helping robots plan for multi-step tasks by
constructing abstract representations of the world around them. Their study is
a step toward building robots that can think and act more like people. For the
study, the researchers introduced a robot named Anathema Device (or Ana, for
short) to a room containing a cupboard, a cooler, a switch that controls a
light inside the cupboard, and a bottle that could be left in either the cooler
or the cupboard. They gave Ana a set of high-level motor skills for
manipulating the objects in the room -- opening and closing both the cooler and
the cupboard, flipping the switch and picking up a bottle. Then they turned Ana
loose to try out her motor skills in the room, recording the sensory data from
her cameras and actuators before and after each skill execution. Those data
were fed into a machine-learning algorithm developed by the team.
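In pseudocode, that trial loop is straightforward: pick a skill, snapshot the sensors, execute, snapshot again, and log the result. The Python sketch below illustrates the idea; the Robot stub, the skill names, and the sensor fields are hypothetical stand-ins, since the article does not describe Ana's actual interface.

    import random

    SKILLS = ["open_cooler", "close_cooler", "open_cupboard",
              "close_cupboard", "flip_switch", "pick_up_bottle"]

    class Robot:
        """Placeholder for the real robot: random sensors, random outcomes."""
        def read_sensors(self):
            # Stand-in for camera pixels and actuator states.
            return {"pixels": [random.random() for _ in range(16)],
                    "gripper_closed": random.choice([True, False])}
        def execute(self, skill):
            # Report whether the skill ran to completion.
            return random.random() < 0.5

    def collect_transitions(robot, num_trials=1000):
        """Log one (before, skill, succeeded, after) tuple per trial."""
        data = []
        for _ in range(num_trials):
            skill = random.choice(SKILLS)
            before = robot.read_sensors()
            succeeded = robot.execute(skill)
            after = robot.read_sensors()
            data.append((before, skill, succeeded, after))
        return data

    transitions = collect_transitions(Robot(), num_trials=100)

Each logged tuple pairs a skill with the sensor readings surrounding its execution, which is the raw material a learning algorithm needs to work out when each skill succeeds and what it changes.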
The researchers showed that Ana
was able to learn a very abstract description of the environment that contained
only what was necessary for her to be able to perform a particular skill. For
example, she learned that in order to open the cooler, she needed to be
standing in front of it and not holding anything (because she needed both hands
to open the lid). She also learned to recognize the configuration of pixels in her visual field associated with the cooler lid being closed, the only state from which the lid can be opened. She learned similar
abstractions associated with her other skills. She learned, for example, that
the light inside the cupboard was so bright that it whited out her sensors. So in
order to manipulate the bottle inside the cupboard, the light had to be off.
She also learned that in order to turn the light off, the cupboard door needed
to be closed, because the open door blocked her access to the switch.
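Preconditions and effects like these are exactly what a classical symbolic planner consumes. As a rough illustration, here is a STRIPS-style sketch in Python of the cupboard scenario; the proposition and skill names are invented for the example, and in the study itself such symbols are learned from sensor data rather than hand-coded.

    from collections import deque

    # Each skill maps to (preconditions, propositions added, propositions deleted).
    SKILL_MODELS = {
        "close_cupboard":  ({"cupboard_open"},
                            {"cupboard_closed"}, {"cupboard_open"}),
        "flip_switch_off": ({"cupboard_closed", "light_on"},
                            {"light_off"}, {"light_on"}),
        "open_cupboard":   ({"cupboard_closed", "hands_free"},
                            {"cupboard_open"}, {"cupboard_closed"}),
        "pick_up_bottle":  ({"cupboard_open", "light_off", "hands_free"},
                            {"holding_bottle"}, {"hands_free"}),
    }

    def plan(start, goal):
        """Breadth-first search over abstract states for a skill sequence."""
        frontier = deque([(frozenset(start), [])])
        seen = {frozenset(start)}
        while frontier:
            state, steps = frontier.popleft()
            if goal <= state:
                return steps
            for name, (pre, add, delete) in SKILL_MODELS.items():
                if pre <= state:  # skill is applicable in this abstract state
                    nxt = frozenset((state - delete) | add)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, steps + [name]))
        return None  # no skill sequence reaches the goal

    print(plan({"cupboard_open", "light_on", "hands_free"}, {"holding_bottle"}))
    # -> ['close_cupboard', 'flip_switch_off', 'open_cupboard', 'pick_up_bottle']

Run as written, the search recovers the dependency chain described above: close the cupboard door, flip the switch, reopen the cupboard, then pick up the bottle.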