Many works of science fiction
have imagined robots that could interact directly with people to provide
entertainment, services or even health care. Robotics is now at a stage where
some of these ideas can be realized, but it remains difficult to make robots
easy to operate. One option is to train robots to recognize and respond to
human gestures. In practice, however, this is difficult because a simple
gesture such as waving a hand can look very different from one person to
another. Designers must develop intelligent computer algorithms that can be
‘trained’ to identify general patterns of motion and relate them correctly to
individual commands.
Researchers at the A*STAR
Institute for Infocomm Research in Singapore have adapted a cognitive memory
model called a localist attractor network (LAN) to develop a new system that
recognizes gestures quickly and accurately while requiring very little training.
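To give a rough sense of how a localist attractor network classifies a gesture, the Python sketch below stores each known gesture as a prototype feature vector (an attractor) and iteratively pulls the observed motion toward whichever prototype it most resembles; the winning attractor identifies the gesture. This is a simplified illustration only: the prototypes, feature dimension and noise parameter are hypothetical and not taken from the published system.

    import numpy as np

    def lan_recognize(observation, prototypes, sigma=0.5, steps=20):
        """Pull the state toward stored gesture prototypes (localist attractors)
        and return the index of the attractor that captures it."""
        z = observation.copy()                      # network state starts at the observation
        priors = np.full(len(prototypes), 1.0 / len(prototypes))
        for _ in range(steps):
            # responsibility of each attractor: prior * Gaussian similarity to the state
            d2 = np.sum((prototypes - z) ** 2, axis=1)
            q = priors * np.exp(-d2 / (2 * sigma ** 2))
            q /= q.sum()
            # move the state toward the weighted combination of attractors
            z = q @ prototypes
        return int(np.argmax(q))

    # hypothetical example: two stored gesture prototypes in a 3-D feature space
    prototypes = np.array([[1.0, 0.0, 0.0],   # e.g. "wave"
                           [0.0, 1.0, 0.0]])  # e.g. "point"
    observed = np.array([0.8, 0.1, 0.05])     # noisy measurement resembling a wave
    print(lan_recognize(observed, prototypes))  # -> 0

Because each stored gesture needs only a single prototype rather than many labelled examples, this kind of model can get by with very little training data.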
They tested their software by integrating it with ShapeTape, a special jacket
that uses fibre optics and inertial sensors to monitor the bending and twisting
of hands and arms. They programmed the ShapeTape to provide data 80 times per
second on the three-dimensional orientation of shoulders, elbows and wrists,
and applied velocity thresholds to detect when gestures were starting.
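A minimal sketch of how such an onset detector might work is shown below, assuming joint orientations arrive as streaming samples at 80 frames per second. The threshold value, joint layout and function name are illustrative assumptions, not details from the published system.

    import numpy as np

    SAMPLE_RATE = 80.0          # samples per second, as reported for ShapeTape
    VELOCITY_THRESHOLD = 0.35   # illustrative threshold (radians per second)

    def detect_gesture_onsets(orientations):
        """orientations: array of shape (T, J) holding joint angles for the
        shoulders, elbows and wrists at each frame. Returns the frame indices
        where motion speed first rises above the threshold, i.e. candidate
        gesture starts."""
        velocity = np.diff(orientations, axis=0) * SAMPLE_RATE   # angular velocity per joint
        speed = np.linalg.norm(velocity, axis=1)                 # overall motion speed per frame
        moving = speed > VELOCITY_THRESHOLD
        # an onset is a frame where the arms start moving after being still
        onsets = np.flatnonzero(moving[1:] & ~moving[:-1]) + 1
        return onsets

Segmenting the data stream this way lets the recognizer consider only the frames between the start and end of a movement, rather than the continuous sensor feed.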