16 September 2008

Watch And Learn

In work that could aid efforts to develop more brain-like computer vision systems, MIT neuroscientists have tricked the visual brain into confusing one object with another, demonstrating that time teaches us how to recognize objects.

It may sound strange, but human eyes never see the same image twice. An object such as a cat can produce innumerable impressions on the retina, depending on the direction of gaze, angle of view, distance and so forth. Every time our eyes move, the pattern of neural activity changes, yet our perception of the cat remains stable. This stability, called 'invariance,' is fundamental to our ability to recognize objects. It feels effortless, but it is a central challenge for computational neuroscience. A possible explanation is suggested by the fact that our eyes tend to move rapidly (about three times per second), whereas physical objects usually change more slowly. Differing patterns of neural activity in rapid succession therefore often reflect different images of the same object. Could the brain take advantage of this simple rule of thumb to learn object invariance?

In this study, monkeys watched a similarly altered world while the researchers recorded from neurons in the inferior temporal (IT) cortex, a high-level visual brain area where object invariance is thought to arise. IT neurons "prefer" certain objects and respond to them regardless of where those objects appear within the visual field. After the monkeys had spent time in the altered world, their IT neurons became confused, just like the previous human subjects. A neuron that preferred sailboats, for example, still preferred sailboats at every location except the swap location, where it learned to prefer teacups. The longer the manipulation, the greater the confusion, exactly as predicted by the temporal contiguity hypothesis. Importantly, just as human infants can learn to see without adult supervision, the monkeys received no feedback from the researchers; the changes in their brains occurred spontaneously as they looked freely around the computer screen.

The team is now testing this idea further using computer vision systems viewing real-world videos. This work was funded by the NIH, the McKnight Endowment Fund for Neuroscience and a gift from Marjorie and Gerald Burnett.
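The temporal contiguity idea can be sketched as a toy simulation. This is purely illustrative and not the study's actual model: the object names, the pull-toward learning rule, and the numeric parameters are all assumptions made for the sketch. A single IT-like neuron is modeled as one response weight per (object, location) pair; whenever one image follows another across a simulated saccade, the neuron's response to the second image is pulled toward its response to the first.

```python
import random

# Toy temporal-contiguity model (hypothetical illustration, not the
# study's actual analysis). A single IT-like neuron's response is
# represented as a weight per (object, location) pair.

objects = ["sailboat", "teacup"]
locations = ["normal", "swap"]

# Initial tuning: the neuron prefers sailboats everywhere.
w = {(obj, loc): (1.0 if obj == "sailboat" else 0.1)
     for obj in objects for loc in locations}

LR = 0.05  # learning rate (assumed value)

def exposure(loc, pairs, trials=500):
    """Simulate saccades to `loc`. Each pair is (object seen just before
    the saccade, object seen just after). Temporal contiguity pulls the
    response to the post-saccade image toward the response evoked by the
    pre-saccade image."""
    for _ in range(trials):
        pre_obj, post_obj = random.choice(pairs)
        pre_response = w[(pre_obj, "normal")]
        key = (post_obj, loc)
        w[key] += LR * (pre_response - w[key])

# Normal locations: the object stays the same across the saccade,
# so the update is a no-op and tuning is preserved.
exposure("normal", [("sailboat", "sailboat"), ("teacup", "teacup")])

# Swap location: the experiment secretly exchanges the two objects
# mid-saccade, so teacups inherit the sailboat response and vice versa.
exposure("swap", [("sailboat", "teacup"), ("teacup", "sailboat")])

print({k: round(v, 2) for k, v in w.items()})
```

After the simulated exposure, the neuron still prefers sailboats at the normal location but has come to prefer teacups at the swap location, mirroring the location-specific confusion reported in the study.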

More information:

http://www.sciencedaily.com/releases/2008/09/080911150046.htm