24 February 2011

Robots Learn Human Perception

Newborn babies have a strong grasp reflex, which is evident when they grab your finger, for example, but that is about all they can do. A two-year-old child, however, is already an expert when it comes to grasping and has dozens of gripping variations at its disposal. For instance, toddlers can gently lift objects and hold a spoon. Small children can competently turn angular and pointed objects around in their hands, and they are also capable of abstraction: they recognise angular objects as angular and round objects as round, regardless of whether the object has three, four or five corners or curves, and regardless of whether they are seeing the object for the first time. It is this ability to abstract that computers still lack today.

Human beings analyse their environment within fractions of a second, as researchers at the Max Planck Institute point out. All we need to do is glance at a surface to know whether it is slippery or not. A computer has to carry out extensive calculations before it can disentangle the input from its sensors and identify what something is. The human brain, in contrast, picks a few basic characteristics out of the storm of sensory stimuli it receives and comes to an accurate conclusion about the nature of the world around us. A technical system can process thousands of data points, figures and measurement values and even analyse the atomic structure of a floor tile, yet a robot would probably still slip on a freshly mopped floor.

The researchers have developed statistical computing processes, so-called estimators, which reduce the complexity of environmental stimuli to the essentials, much as the brain does. Thanks to these estimators, the computer does not get lost in the massive volume of data; instead, it gradually builds an approximate picture of its environment. Michael Black is focusing primarily on vision, and on movement in particular, as moving stimuli are especially salient for the human brain. From the jumble of light reflections, shadows and roaming pixels in a film sequence, computing processes can now extract moving objects, just not as swiftly or as simply as the brain does.

Medical researchers in the US have implanted tiny electrodes in the brains of paraplegic patients, in the area of the brain responsible for movement, the motor cortex. They then analysed the activity of the nerve cells. Nerve cells emit extremely fine electrical impulses when they are active, and the electrodes pick up these faint signals. At first, the recorded activity looks little different from a noisy television screen. Max Planck researchers have succeeded in identifying and interpreting clear activation patterns in this flickering. The computer was able to translate the patients' thoughts into real movements: through the power of thought alone, the patients could move a cursor on a computer monitor. Experts call such links between brain and computer brain-computer interfaces.
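
To give a rough sense of what extracting a moving object from a film sequence involves, here is a minimal Python sketch. It is not the method used by Black's group, only a toy illustration based on comparing consecutive frames; the function names, threshold and synthetic frames are assumptions invented for this example.

# Toy illustration only: not the researchers' actual method, just a minimal
# frame-differencing sketch of how moving pixels can be separated from a
# static background. Thresholds and frame sizes are made up.
import numpy as np

def moving_object_mask(prev_frame, next_frame, threshold=25.0):
    """Mark pixels whose brightness changed noticeably between two
    greyscale frames, a crude estimate of where something moved."""
    difference = np.abs(next_frame.astype(float) - prev_frame.astype(float))
    return difference > threshold

def object_centre(mask):
    """Compress the full pixel mask into two numbers, the centre of the
    moving region, mimicking how an estimator reduces a stimulus to a few
    basic characteristics."""
    rows, cols = np.nonzero(mask)
    if rows.size == 0:
        return None
    return float(rows.mean()), float(cols.mean())

# Synthetic example: a bright square shifts two pixels to the right.
frame_a = np.zeros((64, 64))
frame_b = np.zeros((64, 64))
frame_a[20:30, 20:30] = 255.0
frame_b[20:30, 22:32] = 255.0
print(object_centre(moving_object_mask(frame_a, frame_b)))

Real vision systems use far richer models of motion than this, but the principle of reducing millions of pixels to a handful of meaningful quantities is the same.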
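
The brain-computer interface described above can be illustrated in a similarly simplified way: a decoder learns a mapping from the noisy firing rates of many neurons to an intended cursor movement. The sketch below uses simulated data and an ordinary least-squares fit; the clinical systems referred to in the article rely on considerably more sophisticated decoding, and none of the numbers here come from that work.

# Hedged sketch with simulated data: fit a linear decoder that maps the
# firing rates of motor-cortex neurons to a two-dimensional cursor velocity.
# Neuron counts, noise levels and weights are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 40, 2000

# Simulate training data: each neuron responds to the intended movement
# direction with its own (hidden) preference, plus noise.
true_weights = rng.normal(size=(n_neurons, 2))
intended_velocity = rng.normal(size=(n_samples, 2))
firing_rates = intended_velocity @ true_weights.T \
    + rng.normal(scale=2.0, size=(n_samples, n_neurons))

# Fit the decoder by least squares: which combination of firing rates best
# explains the intended velocity?
decoder, *_ = np.linalg.lstsq(firing_rates, intended_velocity, rcond=None)

# Decode a new burst of activity into a cursor step on the screen.
new_rates = rng.normal(size=(1, 2)) @ true_weights.T \
    + rng.normal(scale=2.0, size=(1, n_neurons))
print("decoded cursor step:", new_rates @ decoder)

A least-squares decoder is the simplest possible choice; it stands in here only to show how patterns in the neural "flickering" can be turned into cursor movement.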

More information:

http://www.mpg.de/1171331/Michael_Black?filter_order=LT&research_topic=BM-NB