24 February 2011

Robots Learn Human Perception

Newborn babies have a strong grip. Their grasp reflex is evident when they grab your finger, for example – but that is about all they can do. A two-year-old child, however, is already an expert at grasping, with dozens of gripping variations: toddlers can gently lift objects, hold a spoon and competently handle round, angular and pointed objects in their hands. They are also capable of abstraction. They recognise angular objects as angular and round objects as round, regardless of whether the object has three, four or five corners or curves – and regardless of whether this is the first time they have seen it. It is this ability to abstract that is still missing from the brain of a computer today.

Human beings analyse their environment within fractions of a second, researchers from the Max Planck Institute state. All we need to do is glance at a surface to know whether it is slippery or not. A computer, by contrast, has to carry out extensive calculations before it can disentangle the input from its sensors and identify what something is. The human brain picks a few basic characteristics out of the storm of sensory stimuli it receives and comes to an accurate conclusion about the nature of the world around us. A technical system can process thousands of data points, figures and measurement values and analyse the atomic structure of a floor tile – yet a robot would probably still slip on a freshly mopped floor.

The researchers developed statistical computing processes, so-called estimators, which – much like the brain – reduce the complexity of environmental stimuli to just what is needed. Thanks to these estimators, the computer does not get lost in the massive volume of data, and the procedure gradually approximates the environment. Michael Black is focusing primarily on vision, and on movements in particular, as these are especially strong stimuli for the human brain. From the jumble of light reflections, shadows and roaming pixels in a film sequence, computing processes can now extract objects that have been moved – just not as swiftly or as simply as the brain does.

Medical researchers in the US implanted tiny electrodes in the area of paraplegic patients' brains responsible for movement – the motor cortex – and then analysed the activity of the nerve cells. Nerve cells emit extremely fine electrical impulses when they are active, and the electrodes detect these tiny signals. At first, this electrical activity does not look much different from a noisy television screen, but the Max Planck researchers have succeeded in identifying and interpreting clear activation patterns in the flickering. The computer was then able to translate the patients' thoughts into real movements: through the power of thought alone, the patients could move the cursor on a computer monitor. Experts call such links between the brain and a computer brain-computer interfaces.
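
As a rough illustration of what such a decoder has to do, the sketch below (Python, with simulated data) fits a simple linear estimator that maps a vector of neural firing rates to a 2D cursor velocity. The linear model, the neuron count and the toy data are assumptions made for illustration only; the estimators used in the actual research are more sophisticated.

    import numpy as np

    # Toy data only: firing rates of 96 hypothetical motor-cortex neurons over 5000
    # time bins, paired with the 2D cursor velocity observed in each bin.
    rng = np.random.default_rng(0)
    n_neurons, n_bins = 96, 5000
    true_weights = rng.normal(size=(n_neurons, 2))
    firing_rates = rng.poisson(lam=5.0, size=(n_bins, n_neurons)).astype(float)
    velocities = firing_rates @ true_weights + rng.normal(scale=2.0, size=(n_bins, 2))

    # Fit a linear decoder by least squares: velocity ~ firing_rates @ W.
    W, *_ = np.linalg.lstsq(firing_rates, velocities, rcond=None)

    # Decode the cursor velocity implied by a new vector of firing rates.
    new_rates = rng.poisson(lam=5.0, size=(1, n_neurons)).astype(float)
    print("decoded cursor velocity (vx, vy):", new_rates @ W)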

More information:

http://www.mpg.de/1171331/Michael_Black?filter_order=LT&research_topic=BM-NB

22 February 2011

Multitasking with BCI Machines

Brain-machine interfaces make gains by learning about their users, letting them rest, and allowing for multitasking. You may have heard of virtual keyboards controlled by thought, brain-powered wheelchairs, and neuro-prosthetic limbs. But even once the mind is trained to send the right kind of signals, operating the interface can be downright tiring – a fact that prevents the technology from being of much use to people with disabilities, among others. Researchers at EPFL have a solution: engineer the system so that it learns about its user, allows for periods of rest, and even permits multitasking.

In a typical brain-computer interface (BCI) set-up, users can send one of three commands – left, right, or no-command. No-command is necessary for a brain-powered wheelchair to continue going straight, for example, or to stay put in front of a specific target. Paradoxically, in order for the wheelchair or small robot to continue on its way, it needs constant input, and this ‘no-command’ is very taxing to maintain and requires extreme concentration. After about an hour, most users are spent – not much help if you need to maneuver that wheelchair through an airport.
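
One way to ease the burden of sustaining ‘no-command’ is shared control: the machine keeps executing the last confident command until a new one arrives, so the user can relax in between. The Python sketch below illustrates that idea; the probability threshold and the two-class decoder output are illustrative assumptions, not details of the EPFL system.

    from dataclasses import dataclass

    @dataclass
    class Decoded:
        """Hypothetical decoder output: probabilities of the two deliberate commands."""
        p_left: float
        p_right: float

    CONFIDENCE = 0.7  # assumed threshold; anything weaker is treated as 'no-command'

    def update_heading(current_heading: str, decoded: Decoded) -> str:
        """Keep the last deliberate command unless a confident new one arrives,
        so the user does not have to hold 'no-command' continuously."""
        if decoded.p_left >= CONFIDENCE:
            return "left"
        if decoded.p_right >= CONFIDENCE:
            return "right"
        return current_heading  # rest period: the wheelchair carries on as before

    # A weak, undecided signal leaves the wheelchair going straight.
    heading = update_heading("straight", Decoded(p_left=0.40, p_right=0.35))
    print(heading)  # -> straight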

More information:

http://actu.epfl.ch/news/at-aaas-2011-taking-brain-computer-interfaces-to-t/

20 February 2011

Virtually Feeling Fat

Greasy food and a lack of exercise aren't the only things that can make you feel fat -- now you can add virtual reality and being poked by a stick to the list. By having people wear head-mounted displays that make them see pot-bellied computer-generated versions of their bodies and by having them poke their tummies with sticks at the same time, scientists found they could make people experience the illusion of having a fat paunch. Such research is more than just an elaborate parlor trick -- it could help people who feel uncomfortable in their own bodies. Virtual reality is typically thought of as a way to manipulate where people feel they are. However, computer scientists at ICREA-University of Barcelona and University College London also find it's a way to tinker with how people view their own bodies. For instance, they previously discovered that they can make a virtual arm feel as if it were attached to a person’s body and even make men feel as if their bodies were female. All these illusions depend on jabbing a person's real arm or body while at the same time simulating these pokes on that participant’s virtual reality counterpart.

They do not even require virtual reality – research over the past decade has shown that a person can feel as if a rubber hand is part of his or her own body if a real hand, hidden from view by the researchers, is patted at the same time that they see the fake one get tapped. This ‘rubber hand illusion’ can even apply to objects that bear no resemblance to body parts: when scientists put adhesive bandages on both tables and people's real hands, stroked both simultaneously and then partially ripped the bandages off only the tables, many people winced and some even reported feeling pain. To further explore self-perception, the researchers developed a one-person rig in which 22 volunteers could tap their own bellies with a stick. At the same time, the participants wore virtual reality goggles displaying virtual rods poking much larger simulated bellies. In the experiments, the subjects heard music with a complex, irregular rhythm through headphones for four minutes and were told to pat their bellies in time with the beat. When the taps the volunteers felt were synchronized with the pokes they saw their virtual bodies receive, on average they reported that their bodies felt bigger than normal.
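
The crucial manipulation is the timing between the felt tap and the seen poke. The toy Python loop below sketches that contrast – rendering the virtual poke either immediately or after a delay – with timings chosen purely for illustration; it is not the apparatus used in the study.

    import time

    def run_trial(synchronous: bool, taps: int = 4, delay_s: float = 0.5) -> None:
        """Render a virtual poke for each felt tap, either at once (illusion condition)
        or after a delay (asynchronous control that weakens the illusion)."""
        for i in range(taps):
            tap_time = time.monotonic()      # stand-in for reading a real tap sensor
            if not synchronous:
                time.sleep(delay_s)          # deliberately desynchronize touch and vision
            lag = time.monotonic() - tap_time
            print(f"tap {i + 1}: virtual poke shown {lag:.2f}s after the felt tap")
            time.sleep(0.2)                  # pause before the next beat of the rhythm

    run_trial(synchronous=True)
    run_trial(synchronous=False)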

More information:

http://www.insidescience.org/research/virtually-feeling-fat

17 February 2011

3D Films On Cell Phone

Researchers at Fraunhofer have combined the new mobile radio standard LTE-Advanced with a video coding technique that puts 3D films on your cell phone. Stalled page loads and postage-stamp-sized videos jiggling all over the screen – those days are gone for good thanks to smartphones, flat rates and fast data links. Last year, 100 million YouTube videos were watched on cell phones around the world. A survey by the high-tech association BITKOM found that 10 million people in Germany surf the Internet with their cell phones. And another hype shows no sign of fading: 3D films. Researchers at the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut, HHI in Berlin, Germany, have now put the two together so you can experience the mobile Internet in 3D.

The researchers have come up with a special compression technique, H.264/AVC, for films in especially good high-resolution HD quality; it computes the films down to low data rates while maintaining quality. What the H.264/AVC video format is to high-definition films, Multiview Video Coding (MVC) is to 3D films. Scientists at the HHI explained that MVC packs together the two images needed for the stereoscopic 3D effect, measurably reducing the film's bit rate; the technique can shrink 3D films by as much as 40 percent. That means you can quickly receive excellent-quality 3D films in combination with the new 3G-LTE mobile radio standard. The key is the radio resource management integrated into the LTE system, which allows flexible data transmission while supporting various quality-of-service classes.
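
To put the quoted saving into perspective, the back-of-the-envelope Python calculation below compares sending the two stereo views as independent H.264/AVC streams with packing them together via MVC. It assumes the 40 percent saving is measured against independent transmission, and the per-view bit rate is an illustrative figure, not one from HHI.

    # Illustrative arithmetic only; 5 Mbit/s per HD view is an assumed figure.
    base_view_mbps = 5.0                     # one H.264/AVC-coded HD view
    simulcast_mbps = 2 * base_view_mbps      # left and right views sent independently
    mvc_saving = 0.40                        # "as much as 40 percent" (from the article)
    mvc_mbps = simulcast_mbps * (1 - mvc_saving)
    print(f"simulcast: {simulcast_mbps:.1f} Mbit/s, MVC: {mvc_mbps:.1f} Mbit/s")
    # -> simulcast: 10.0 Mbit/s, MVC: 6.0 Mbit/s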

More information:

http://www.fraunhofer.de/en/press/research-news/2010-2011/14/3d-films-on-cell-phone.jsp

15 February 2011

Ancient Buildings From Historic Maps

Software that recognises the outlines of buildings shown on historic maps should make it easier to create digital reconstructions of long-lost cities. It could also lead to virtual museum exhibits of historic locations. The conventional method for digitising paper maps involves the labour-intensive process of tracing over buildings by hand. Now a team of researchers from the University of East Anglia in Norwich, UK, has developed software to do this automatically. On maps where buildings are shown in characteristic colours, the software is fully automatic: it first detects blocks of colour on a scan of the map, and then highlights the edge of each block to generate a clear outline of the building. It will also work with black and white maps if the user clicks a point inside each building.
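
A minimal Python/OpenCV sketch of that two-step idea is shown below. The file name and the colour range for buildings are assumptions made for illustration; the UEA tool's actual implementation is not described in the article.

    import cv2
    import numpy as np

    scan = cv2.imread("historic_map_scan.png")          # hypothetical scanned map
    hsv = cv2.cvtColor(scan, cv2.COLOR_BGR2HSV)

    # Step 1: detect blocks drawn in the characteristic building colour
    # (an assumed reddish range here).
    lower, upper = np.array([0, 60, 60]), np.array([12, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((5, 5), np.uint8))

    # Step 2: trace the edge of each colour block to get a clean building outline.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    outlines = [cv2.approxPolyDP(c, 2.0, True) for c in contours]
    print(f"extracted {len(outlines)} building outlines")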

The automatic mode is between 10 and 100 times faster than tracing the outlines by hand. Even with black and white maps, it is at least twice as fast. One problem when working with old maps is that the scale can be seriously distorted. The software can correct for this by overlaying the building outlines on an accurately surveyed modern map. The extracted outlines can be imported into a commercial software package called CityEngine that generates 3D images with the help of information about what buildings from the period in question would have looked like. Researchers suggest that museum curators might use the software to add interactive tours of historic locations to their exhibits.

More information:

http://www.newscientist.com/article/mg20927986.000-ancient-buildings-brought-to-life-from-historic-maps.html

08 February 2011

Virtual Cosmetic Surgery

For some plastic surgery patients, expectations are unrealistically high. Basing their hopes on the before-and-after albums offered in surgeons' offices, they expect to achieve a perfect body or to look just like a favourite celeb. But those albums only show how someone else's liposuction, breast augmentation, or Beyonce bum enhancement turned out. Now Tel Aviv University researchers are developing software based on real clinical data to give patients a more accurate before-and-after picture before the scalpel comes down. Tackling a very difficult mathematical problem in computer modelling – predicting the ‘deformations’ of non-rigid objects – the researchers have built a tool that can generate an anatomically accurate after-surgery image. With the help of experienced plastic surgeons, the tool can work like a search engine, retrieving geometric objects in the same manner Google retrieves web pages. It helps patients avoid unexpected results in the plastic surgeon's office, and it can also help a surgeon determine the most favourable outcome for the patient.

Current image-prediction software only generates 2D images, and its processing power is limited to relatively simple image-processing programs like Photoshop. The prototype gives surgeons and their patients a way to see a 3D before-and-after image as though the patient had really undergone the operation. For this application, the researchers drew on data from past plastic surgery patients and considered a number of variables, such as the patients' ages and different tissue types. They designed the program by feeding numerous pre- and post-surgery images into a computer to ‘teach’ it to generate post-surgery images more accurately. Now under commercial development, the software will not only show women and men a much more accurate outcome, but also help surgeons achieve more favourable results for their clientele. A significant challenge was creating an algorithm that could generate a 3D image from a 2D picture, since today's photographic equipment can ‘see’ and represent the human body from only one angle.
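
The retrieval idea can be pictured with a toy example: describe each past case by a numeric shape descriptor, then return the post-surgery outcome of the most similar past patient. Everything in the Python sketch below – the descriptors, their size and the nearest-neighbour search – is an illustrative assumption, not the Tel Aviv group's actual method.

    import numpy as np

    rng = np.random.default_rng(1)
    n_cases, n_features = 500, 16
    # Hypothetical database of past cases: pre-op shape descriptors (plus variables
    # such as age or tissue type) and the corresponding post-op descriptors.
    pre_op_db = rng.normal(size=(n_cases, n_features))
    post_op_db = pre_op_db + rng.normal(scale=0.3, size=(n_cases, n_features))

    def predict_outcome(new_pre_op: np.ndarray) -> np.ndarray:
        """Retrieve the closest past case and return its post-op descriptor."""
        distances = np.linalg.norm(pre_op_db - new_pre_op, axis=1)
        return post_op_db[np.argmin(distances)]

    new_patient = rng.normal(size=n_features)
    print("predicted post-op descriptor (first 4 values):",
          np.round(predict_outcome(new_patient)[:4], 2))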

More information:

http://www.aftau.org/site/News2?page=NewsArticle&id=13831

06 February 2011

Gesture Recognition Robotic Nurse

Surgeons of the future might use a system that recognizes hand gestures as commands to control a robotic scrub nurse or to tell a computer to display medical images of the patient during an operation. Both the hand-gesture recognition and the robotic nurse innovations might help to reduce the length of surgeries and the potential for infection, say researchers from Purdue University. The vision-based hand-gesture recognition technology could have other applications, including the coordination of emergency response activities during disasters.

Surgeons routinely need to review medical images and records during surgery, but stepping away from the operating table and touching a keyboard and mouse can delay the surgery and increase the risk of spreading infection-causing bacteria. The new approach is a system that uses a camera and specialized algorithms to recognize hand gestures as commands to instruct a computer or robot. At the same time, a robotic scrub nurse represents a potential new tool that might improve operating-room efficiency.
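
Downstream of the recognition step, the gestures have to be mapped to sterile-safe actions. The small Python dispatcher below sketches that mapping; the gesture names, actions and confidence threshold are illustrative assumptions rather than the vocabulary of the Purdue system.

    # Illustrative gesture-to-command table; not the Purdue system's actual vocabulary.
    COMMANDS = {
        "swipe_left": "show previous medical image",
        "swipe_right": "show next medical image",
        "pinch": "zoom into current image",
        "open_palm": "robotic scrub nurse: pass the requested instrument",
    }

    def dispatch(gesture_label: str, confidence: float, threshold: float = 0.8) -> str:
        """Act only on confidently recognized gestures, so that stray hand movements
        over the operating table are ignored."""
        if confidence < threshold or gesture_label not in COMMANDS:
            return "no action"
        return COMMANDS[gesture_label]

    print(dispatch("swipe_right", 0.93))   # -> show next medical image
    print(dispatch("swipe_right", 0.55))   # -> no action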

More information:

http://www.purdue.edu/newsroom/research/2011/110203WachsGestures.html

01 February 2011

A Clearer Picture of Vision

The human retina – the part of the eye that converts incoming light into electrochemical signals – has about 100 million light-sensitive cells, so retinal images contain a huge amount of data. High-level visual-processing tasks – like object recognition, gauging size and distance, or calculating the trajectory of a moving object – couldn't possibly preserve all that data: the brain just doesn't have enough neurons. So vision scientists have long assumed that the brain must somehow summarize the content of retinal images, reducing their informational load before passing them on to higher-order processes. At the Society of Photo-Optical Instrumentation Engineers' Human Vision and Electronic Imaging conference, research scientists from MIT's Department of Brain and Cognitive Sciences presented a new mathematical model of how the brain does that summarizing. The model accurately predicts the visual system's failure on certain types of image-processing tasks, a good indication that it captures some aspect of human cognition.

Most models of human object recognition assume that the first thing the brain does with a retinal image is identify edges — boundaries between regions with different light-reflective properties — and sort them according to alignment: horizontal, vertical and diagonal. Then, the story goes, the brain starts assembling these features into primitive shapes, registering, for instance, that in some part of the visual field, a horizontal feature appears above a vertical feature, or two diagonals cross each other. From these primitive shapes, it builds up more complex shapes — four L’s with different orientations, for instance, would make a square — and so on, until it’s constructed shapes that it can identify as features of known objects. While this might be a good model of what happens at the center of the visual field, researchers argue that it’s probably less applicable to the periphery, where human object discrimination is notoriously weak.
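
The flavour of such summarizing can be conveyed with a small Python sketch: compute edge responses over an image patch and keep only a histogram of their orientations, a handful of numbers in place of thousands of pixel values. The gradient-based filter and the eight orientation bins are simplifications assumed for illustration, not the authors' published model.

    import numpy as np

    rng = np.random.default_rng(2)
    patch = rng.random((64, 64))            # stand-in for a peripheral image patch

    gy, gx = np.gradient(patch)             # vertical and horizontal edge responses
    magnitude = np.hypot(gx, gy)
    orientation = np.arctan2(gy, gx)        # edge orientation at each pixel

    # Summarize the patch with a magnitude-weighted orientation histogram:
    # 8 numbers replace 64 x 64 pixel values, which is why fine spatial detail is lost.
    bins = np.linspace(-np.pi, np.pi, 9)
    summary, _ = np.histogram(orientation, bins=bins, weights=magnitude)
    print("orientation summary statistics:", np.round(summary, 2))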

More information:

http://web.mit.edu/newsoffice/2011/vision-coding-0128.html