30 July 2010

A Smoother Street View

New street-level imaging software developed by Microsoft could help people find locations more quickly on the Web. The software could also open up new space for online advertising. Services like Google Street View and Bing Streetside instantly teleport Web surfers to any street corner from Tucson to Tokyo. However, the panoramic photos these services offer provide only a limited perspective: you can't travel smoothly down a street.

Instead, you have to jump from one panoramic ‘bubble’ to the next -- not the ideal way to identify a specific address or explore a new neighborhood. Microsoft researchers have come up with a refinement to Bing Streetside called Street Slide. It combines slices from multiple panoramas captured along a stretch of road into one continuous view, which can be examined from a distance or ‘smooth scrolled’ sideways.
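
The core idea can be sketched in a few lines: keep only the low-distortion central strip of each panorama and join the strips side by side. The sketch below is an illustrative approximation only; the array shapes, the `strip_frac` parameter and the function name are our own, not Microsoft's actual pipeline (which also aligns, scales and blends the slices):

```python
import numpy as np

def street_slide_strip(panoramas, strip_frac=0.2):
    """Join the central vertical slice of each panorama (where
    perspective distortion is lowest) into one continuous,
    sideways-scrollable strip."""
    slices = []
    for pano in panoramas:
        w = pano.shape[1]
        strip_w = max(1, int(w * strip_frac))
        start = (w - strip_w) // 2            # centre the slice
        slices.append(pano[:, start:start + strip_w])
    return np.concatenate(slices, axis=1)     # one continuous strip

# Three dummy 100x400 'panoramas' captured along a street
panos = [np.full((100, 400, 3), i, dtype=np.uint8) for i in range(3)]
strip = street_slide_strip(panos)
print(strip.shape)  # (100, 240, 3): three 80-pixel-wide slices joined
```

The real system additionally corrects for camera spacing and perspective before concatenating, which is what makes the scrolling appear seamless.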

More information:


28 July 2010

Synchronizing Brain's Neural Activity

The rhythmic electric fields generated by the brain during deep sleep and other periods of intensely coordinated neural activity could amplify and synchronize actions along the same neural networks that initially created those fields, according to a new study. The finding indicates that the brain's electric fields are not just passive by-products of neural activity -- they might provide feedback that regulates how the brain functions, especially during deep, or slow-wave, sleep. Although similar ideas have been considered for decades, this is the first direct evidence that the electric fields generated by the cerebral cortex change the behavior of the neurons that engender them.

The brain is an intricate network of individual nerve cells, or neurons, that use electrical and chemical signals to communicate with one another. Every time an electrical impulse, or action potential, races down the branch of a neuron, a tiny electric field surrounds that cell.

Researchers created an experimental model that mimicked what might happen in the intact brain of a living animal. First, they suspended a slice of brain tissue from the visual cortex of a ferret in artificial cerebrospinal fluid. The living cortical tissue behaved as though the ferret brain were in slow-wave (non-rapid eye movement) sleep, during which the brain produces sluggish but highly synchronous waves of electrical activity. The team's next step was to find out what would happen to the neural activity in the brain slice when it was subjected to a weak electric field.

They surrounded the cortical sample with an electric field that approximated the size and polarity of the fields produced by an intact ferret brain during slow-wave sleep, creating an exaggerated version of the exact feedback loop they were investigating. Essentially, they enveloped the brain slice in an echo of itself. When the team applied this electric-field echo, they found it amplified and synchronized the neural activity in the brain slice. The field didn't create disorder -- it increased harmony: the roar of the brain slice became louder and more regular. Not only did the researchers show that this positive feedback facilitated the synchronous slow waves of electrical activity in the slice of ferret brain, they also showed that an electric field of the same strength, but opposite polarity, disrupted its synchronous neural activity.

The new study faces a couple of methodological imperfections. First, the simple and uniform electric field created by electrodes in the laboratory does not perfectly mimic the complexity of the fields generated by a living brain. Second, the experimental model relied on an extremely thin slice of neural tissue -- hardly the same as an intact brain. The researchers say these flaws are unlikely to change the general conclusions of the study, however, because the underlying mechanisms of electrical activity remain consistent enough between the lab model and a living organism.
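
The qualitative asymmetry reported here (matched-polarity feedback boosting synchrony, reversed polarity disrupting it) can be illustrated with a toy phase-oscillator model. This is only an analogy to the feedback loop, not the study's biophysics; every name and parameter below is our own:

```python
import numpy as np

def simulate_synchrony(field_gain, n=100, steps=2000, dt=0.01, seed=0):
    """Kuramoto-style caricature of field feedback: each 'neuron' is a
    phase oscillator, and the field is modelled as mean-field coupling
    whose sign follows the applied field's polarity (positive = matched,
    negative = reversed). Returns the final order parameter r
    (0 = asynchronous firing, 1 = fully synchronous)."""
    rng = np.random.default_rng(seed)
    omega = rng.normal(1.0, 0.1, n)        # slightly varied intrinsic rates
    theta = rng.uniform(0, 2 * np.pi, n)   # random starting phases
    for _ in range(steps):
        mean_phase = np.angle(np.exp(1j * theta).mean())
        theta += dt * (omega + field_gain * np.sin(mean_phase - theta))
    return np.abs(np.exp(1j * theta).mean())

r_matched = simulate_synchrony(field_gain=2.0)    # matched polarity
r_reversed = simulate_synchrony(field_gain=-2.0)  # opposite polarity
```

In this toy model the matched-polarity run drives r toward 1 while the reversed-polarity run leaves the population incoherent, echoing the amplify/disrupt result in the slice experiment.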

More information:


26 July 2010

Superimposing Images of History

Superimposing a historic photo on an up-to-date snap of the same scene is a neat way to bring history to life, as the website historypin.com demonstrates. If you want to take a modern photo that will contrast effectively with its historical counterpart, though, you need to ensure it is taken from the same spot, and with the same zoom level. If you don't, the combined picture ends up looking disjointed, with roofs, walls and roads poorly matched. Help is at hand, however, in the form of new software for digital cameras that helps people get their shot-framing spot on.

Researchers at the Massachusetts Institute of Technology in Cambridge, Massachusetts, collaborated with Adobe Systems in San Jose, California, and turned to a technique called visual homing to come up with an answer. Visual homing is used in robotics to guide a machine to a precise location, such as a charging station. The team's software runs on a laptop linked to a digital camera; it compares the camera's view to a preloaded historical scene and provides instructions for adjusting the camera's position and zoom to best match that scene.
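
As a rough flavor of the matching step, the sketch below aligns two frames by cross-correlating their column-brightness profiles and turns the best offset into a pan instruction. It is a heavily simplified stand-in for the team's visual-homing pipeline, which also handles zoom, translation and rotation; all names and thresholds here are illustrative:

```python
import numpy as np

def homing_instruction(current, target, tolerance=2):
    """Compare the live frame to the preloaded historical scene and
    return a framing instruction. Both frames are reduced to 1-D
    column-brightness profiles; the offset that best aligns them
    (found by cross-correlation) becomes a pan suggestion."""
    cur = current.mean(axis=0) - current.mean()
    tgt = target.mean(axis=0) - target.mean()
    corr = np.correlate(cur, tgt, mode="full")
    shift = int(np.argmax(corr)) - (len(tgt) - 1)
    if abs(shift) <= tolerance:
        return "hold: framing matches"
    # shift > 0: the scene sits further right in the live frame,
    # so panning right re-centres it
    direction = "right" if shift > 0 else "left"
    return f"pan {direction} by ~{abs(shift)} px"

# Target has a landmark at column 10; the live frame sees it at 20
target = np.zeros((5, 60)); target[:, 10] = 1.0
current = np.zeros((5, 60)); current[:, 20] = 1.0
print(homing_instruction(current, target))  # pan right by ~10 px
```

A robotics-grade implementation would match local image features rather than brightness columns, but the feedback loop (compare, estimate offset, instruct) is the same.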

More information:


22 July 2010

Predicting Human Visual Attention

Scientists have just come several steps closer to understanding change blindness -- the well-studied failure of humans to detect seemingly obvious changes to scenes around them -- with new research that used a computer-based model to predict what types of changes people are more likely to notice. This is one of the first applications of computer intelligence to the study of human visual intelligence, researchers from Queen Mary, University of London said. According to the team, the biologically inspired mathematics they have developed and tested could eventually let computer vision systems such as robots detect interesting elements in their visual environment. During the study, participants were asked to spot the differences between pre-change and post-change versions of a series of pictures.

Some of these pictures had elements added, removed or color-altered, with the location of the change based on its attention-grabbing properties (known as the salience level). Unlike previous research, in which scientists studied change blindness by manually manipulating such pictures and deciding what and where to change, the computer model used in this study eliminated any human bias. The research team at Queen Mary's School of Electronic Engineering and Computer Science developed an algorithm that let the computer decide how to change the images that study participants were asked to view. Tests also showed that the addition or removal of an object from the scene is detected more readily than a change in the object's color, a result that surprised the scientists.
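
To make the idea concrete, here is a toy version of salience-guided change placement: compute a crude centre-surround saliency map, then let the program, not a human, pick the most (or least) salient location to alter. The saliency measure below is our own simplification; it is not the model used in the study:

```python
import numpy as np

def saliency_map(gray, k=5):
    """Toy centre-surround saliency: how far each pixel's local
    neighbourhood mean sits from the image's global mean."""
    h, w = gray.shape
    pad = k // 2
    padded = np.pad(gray, pad, mode="edge")
    local = np.empty_like(gray, dtype=float)
    for i in range(h):
        for j in range(w):
            local[i, j] = padded[i:i + k, j:j + k].mean()
    return np.abs(local - gray.mean())

def pick_change_site(gray, high_salience=True):
    """Let the algorithm choose where to alter the image: the most
    (or least) salient location, with no human in the loop."""
    sal = saliency_map(gray)
    idx = sal.argmax() if high_salience else sal.argmin()
    return np.unravel_index(idx, sal.shape)

# A dark scene with one bright object: the model targets the object
gray = np.zeros((20, 20))
gray[8:11, 8:11] = 1.0
row, col = pick_change_site(gray)
```

Varying the `high_salience` flag is what lets an experiment probe whether changes at conspicuous locations are noticed more often than changes at inconspicuous ones.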

More information:


20 July 2010

Mixed Reality Cookbook

What we perceive in the world is highly influenced by what we are looking for. European researchers have used this insight to create a convincing and engaging mixed reality, and they have put together a cookbook so others can do it, too.

In a famous experiment, a group of volunteers watched a video of two teams, one dressed in black and one in white, passing a ball between them. The volunteers had to count the number of times the ball was passed directly from one player in black to another player in black. They performed the task excellently. What they failed to notice was the man in the gorilla suit who walked on screen and jumped up and down during the game. The experiment showed that what you see is strongly influenced by what you are looking for. In ophthalmology, researchers have found that the eye does not see everything you perceive; neural processing fills in parts of the scene by inference from the bits that are observed. In quantum physics, researchers discovered that particles change behaviour depending on whether they are being observed. In field after field, researchers have found that perception is not linear; it is fuzzy, and it can be strongly influenced by carefully choosing the right cues.

The cues do not necessarily require complex technology. The Wii, a very popular gaming platform, abandoned the arms race of ever-more-powerful processors and graphics cards and instead incorporated a simple motion sensor. Now users' gestures and reflexes drive the game, changing the pastime from a solitary, passive experience into an active, social one. Those two additions, sociability and physicality, dramatically enhance the sense of experienced reality engendered by the game.
Until now, virtual- and mixed-reality technologies were widely assumed to depend on more processing power, more technologically advanced interfaces, more animation and more detailed textures. It now seems, however, that mixed reality is more powerfully and realistically evoked by combining perceptual dimensions with novel technologies to create a greater depth of experience. In IPCity, an EU-funded mixed reality project, researchers studied dozens of technologies to find those that dramatically enhance a user's experience of a given task, all in an effort to increase citizens' participation in civic life.

Using virtual experiences (V-Ex) like this to bring citizens closer to the city, the project embarked on what is probably the largest concerted effort in recent times to examine the widest variety of mixed reality implementations. The project created applications for town planning, gaming, environmental awareness and storytelling. It enhanced engagement with the social, cultural and historical fabric of a city through location awareness and mapping, and it developed social storytelling rooted at locations within the streetscape. Using a combination of easy-to-understand yet state-of-the-art technologies and location sensing, the researchers were able to create convincing cross-reality experiences by engaging multiple senses in parallel. The project took perceptual and mixed reality research out of the lab and into the real world through a combination of large-scale field trials and longitudinal studies. As a result, the IPCity team has developed cookbook-like guidelines for creating mixed reality experiences.

Take Urban Renewal, an urban redesign application in which the researchers used a wide variety of media and interfaces to engage citizens in redesigning an urban space. IPCity's Colour Table is a particularly innovative interface, using physical tokens to represent elements within a scene, such as buildings or other objects. An overhead camera projects the design table onto a wall, revealing changes as they develop from a bird's-eye view, while a second camera ‘interprets’ the tokens and projects virtual mock-ups onto a backdrop of the real site. On one screen, users can see how they have arranged the tokens; on another, they see how that arrangement would affect the real landscape. The entire set-up, along with other tools, is part of a mobile tent that is transported to the actual location of the proposed building, so participants can visualise the real-world environment. The combination of these technologies, along with subtle audio streams, evokes a very convincing air of engagement in the task.

More information:


10 July 2010

Reach Out & Touch Virtual Reality

European researchers have virtually teleported real objects through cyberspace, touched things in virtual reality and even felt the movements of a virtual dance partner. It sounds like science fiction, but advances in haptic technology and a new approach to generating VR content are helping to create virtual experiences that are far more realistic and immersive than anything achieved before. Not only do users see and hear their virtual surroundings, objects and avatars, but they can touch them as well, paving the way for new applications in telepresence, telemedicine, industrial design, gaming and entertainment.

Nine universities and research institutes are developing technology to make VR objects and characters touchable. With funding from the EU in the Immersence project, they developed innovative haptic and multi-modal interfaces, new signal processing techniques and a pioneering method to generate VR objects from real-world objects in real time. The researchers also worked on techniques that would allow a user to feel different textures and sense the stiffness of an object, enabling them to differentiate between a hard box, a soft fluffy frog or even a liquid.

More information:


07 July 2010

TV & Video Games Reduce Attention

Parents looking to get their kids' attention -- or to keep them focused at home and in the classroom -- should try to limit their television viewing and video game play. That's because a new study led by three Iowa State University psychologists has found that both viewing television and playing video games are associated with increased attention problems in youths. The research, which included both elementary-school-age and college-age participants, found that children who exceeded the two hours per day of screen time recommended by the American Academy of Pediatrics were 1.5 to 2 times more likely to be above average in attention problems. The researchers assessed 1,323 children in third, fourth and fifth grades over 13 months, using reports from the parents and children about their video game and television habits, as well as teacher reports of attention problems.

Another group of 210 college students provided self-reports of television habits, video game exposure and attention problems. Previous research had associated television viewing with attention problems in children; the new study found effects of similar magnitude from time spent playing video games. Based on these findings, the researchers conclude that TV viewing and video game play may be contributing factors in attention deficit hyperactivity disorder (ADHD) in children. Lead author Edward Swing points out that the associations between attention problems and TV and video game exposure are significant, but small. The researchers plan to continue studying the effects of screen time on attention, and they hope future work can identify which aspects of television and video games are most relevant to attention problems.

More information:


06 July 2010

A Pacemaker for Your Brain

By stimulating certain areas of the brain, scientists can alleviate the effects of disorders such as depression or Parkinson's disease. That's the good news. But because that stimulation currently cannot be controlled precisely, over-stimulation is a serious concern, and the therapy can lose some of its benefit for the patient over time. Now a Tel Aviv University team, part of a European consortium, is delving deep into human behavior, neurophysiology and engineering to create a chip that can help doctors wire computer applications and sensors to the brain. The chip will provide deep brain stimulation precisely where and when it's needed.

Researchers are working toward a chip that could help treat some diseases of the mind within just a few years. The platform is flexible enough to provide a basis for a variety of clinical experiments, with tools that can be programmed for specific disorders. For example, the chip could restore functions of the brain lost after a traumatic brain injury from a car accident or stroke.

The team's methodology is straightforward: they record activity using electrodes implanted in diseased areas of the brain. Based on an analysis of this activity, they develop algorithms that simulate healthy neuronal activity; these are programmed into a microchip and fed back into the brain. For now, the chip, called the Rehabilitation Nano Chip (or ReNaChip), is hooked up to tiny electrodes implanted in the brain.

But as chips become smaller, the ReNaChip could be made small enough to be ‘etched’ right onto the electrodes themselves. For therapeutic purposes, though, only the electrodes will be inserted into the brain; the chip itself can be implanted just under the skin, like a cardiac pacemaker, ensuring that the brain is stimulated only when it needs to be. One of the challenges of the proposed technology is the size of the electrodes: the researchers hope to further miniaturize deep brain electrodes while adding more sensors at the same time.

The idea that a chip can interface between the inputs and outputs of a particular brain area is a very new concept in scientific circles, although movies and TV shows about bionic humans have been part of popular culture for decades. The researchers say the ReNaChip could help people whose brains have deteriorated with age or been damaged by injury or disease. The chip will not only provide a bionic replacement for lost neuronal function; under ideal conditions, it could significantly rehabilitate the brain. Currently, the researchers are attempting to rehabilitate motor-learning functions lost due to brain damage. A controlled treatment for drug-resistant epilepsy, based on the team's technology, could be only a few years away.
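
The record-analyse-stimulate loop described above can be caricatured in a few lines of code. This is purely illustrative: the function name, the threshold rule and the data shapes are our own, not the ReNaChip's actual design:

```python
import numpy as np

def closed_loop_step(recording, threshold, stim_pattern):
    """One cycle of a hypothetical record-analyse-stimulate loop:
    deliver the pre-programmed 'healthy' pattern only when recorded
    activity deviates from baseline, otherwise stay silent, so the
    brain is stimulated only when it needs to be."""
    deviation = np.abs(recording - recording.mean()).max()
    if deviation > threshold:
        return stim_pattern   # corrective pattern fed back via electrodes
    return None               # activity looks healthy: no stimulation

healthy = np.zeros(100)                  # quiet baseline window
abnormal = healthy.copy()
abnormal[50] = 5.0                       # large pathological spike
print(closed_loop_step(healthy, 1.0, np.ones(10)))  # None
```

The on-demand character of the loop, stimulating only on deviation, is what distinguishes this approach from the continuous stimulation that risks the over-stimulation problem mentioned above.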

More information: