31 March 2015

Virtual Nose Eliminates Simulation Sickness in VR

Simulator sickness, or simulation sickness, is a common phenomenon in virtual reality games. While playing these games, people sometimes experience nausea and vertigo, side effects that are keeping virtual reality from becoming a mainstream technology. However, new findings could ease this problem. Simulator sickness arises from conflicts among several physiological systems: a person's overall sense of position and touch, the muscles controlling eye movements, and the liquid-filled tubes of the inner ear.


Studies have suggested that simulation sickness is less severe when fixed visual reference objects, such as an airplane's cockpit or a car's dashboard, remain within the user's field of view. With this finding in mind, researchers hit upon the idea of inserting a virtual nose into VR games. Researchers from Purdue University found that this helped reduce motion sickness in VR games.
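Purdue's own implementation is not described here; purely as an illustration, the following Python sketch (the function names, the nose offset values, and the draw stub are all assumptions) shows the basic idea of a head-locked reference object: the nose is re-anchored to the head pose every frame, so it stays fixed in the user's view while the rest of the scene moves, much like a cockpit or dashboard.

```python
import numpy as np

def translation(x: float, y: float, z: float) -> np.ndarray:
    """Build a 4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def draw(pose: np.ndarray) -> None:
    pass  # stand-in for the engine's actual draw call

# Hypothetical placement: a small nose-shaped mesh a few centimetres
# below and in front of the virtual camera, roughly where the real
# nose sits in the field of view.
NOSE_OFFSET = translation(0.0, -0.04, -0.06)

def render_frame(head_pose: np.ndarray, world_poses: list) -> None:
    # World-locked objects keep their own transforms and sweep across
    # the view as the head turns; the nose is re-anchored to the head
    # pose every frame, so it never moves relative to the eyes.
    nose_pose = head_pose @ NOSE_OFFSET
    draw(nose_pose)            # fixed visual reference object
    for pose in world_poses:
        draw(pose)             # ordinary scene content
```

Because the offset is composed with the head pose rather than stored in world coordinates, the reference object gives the visual system a stable anchor regardless of how the scene moves.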

More information:

25 March 2015

After Learning New Words, Brain Sees Them As Pictures

When we look at a known word, our brain sees it like a picture, not a group of letters needing to be processed. That's the finding from a Georgetown University Medical Center (GUMC) study which shows the brain learns words quickly by tuning neurons to respond to a complete word, not parts of it. Neurons respond differently to real words, such as turf, than to nonsense words, such as turt, showing that a small area of the brain is holistically tuned to recognize complete words. People are not recognizing words by quickly spelling them out or identifying parts of words, as some researchers have suggested. Instead, neurons in a small brain area remember how the whole word looks—using what could be called a visual dictionary.


This small area of the brain is found on the left side of the visual cortex, opposite the fusiform face area on the right side, which remembers how faces look. One area is selective for a whole face, allowing us to quickly recognize people, and the other is selective for a whole word, which helps us read quickly. The study asked 25 adult participants to learn a set of 150 nonsense words, and the brain plasticity associated with learning was investigated with fMRI rapid adaptation both before and after training. The investigators found that the visual word form area changed as the participants learned the nonsense words: before training, the neurons responded to the training words as if they were nonsense words, but after training they responded to the learned words as if they were real words.

More information:

24 March 2015

Sulon Cortex HMD

The Sulon Cortex essentially turns your surrounding environment into an augmented or virtual reality experience, thanks to sensors on the headset that are spatially aware of your surroundings. Unlike the Oculus Rift, which requires a PC, Sony's Project Morpheus, which needs a PS4, and the Samsung Gear VR, which requires a Galaxy Note 4 phone, the final build of the Cortex promises a self-contained experience that lets you walk around the scene you're in.


The HMD can work both indoors and outdoors, and supposedly doesn't need extra cameras or sensors, nor is it affected by ambient light, because it can spatially map environments in real time. There are also two different types of HMDs. The wired headset has a large, eight-ball-like sensor on the back, which also makes you look even weirder than usual for VR/AR devices. You can customize the colors of the hexagonal lights on the sides to blue, green, pink and purple.

More information:

17 March 2015

Real-Time Holographic Displays

Real-time dynamic holographic displays, long the realm of science fiction, could be one step closer to reality: researchers from the University of Cambridge have designed a new type of pixel element and demonstrated its unique switching capability, which gives far greater control over displays at the level of individual pixels and could make three-dimensional holographic displays possible. Unlike a photograph, a hologram is created when light bounces off a sheet of material with grooves in just the right places to project an image away from the surface. When looking at a hologram from within this artificially generated light field, the viewer gets the same visual impression as if the object were directly in front of them.



Currently, the development of holographic displays is limited by the lack of technology that allows all the properties of light to be controlled at the level of individual pixels. A hologram encodes a large amount of optical information, and a dynamic representation of a holographic image requires vast amounts of information to be modulated on a display device. A relatively large area exists in which additional functionality can be added by patterning nanostructures, increasing the capacity of pixels and making them suitable for holographic displays. Normally, devices that use plasmonic optical antennas are passive, whereas active control is essential for real-world applications. Through integration with liquid crystals, in the form of a typical pixel architecture, the researchers were able to actively switch which hologram is excited and therefore which output image is selected.

More information:

16 March 2015

Visual Turing Test

Computers are able to recognize objects in photographs and other images, but how well can they understand the relationships or implied activities between objects? Researchers from Brown and Johns Hopkins universities have devised a new way to evaluate how well computers can divine such information from images. The team describes its new system as a "visual Turing test," after the legendary computer scientist Alan Turing's test of the extent to which computers display human-like intelligence. Traditional computer vision benchmarks tend to measure an algorithm's performance in detecting objects within an image, or how well a system identifies an image's global attributes.


Recognizing that an image depicts two people walking together and having a conversation reflects a much deeper understanding: describing an image as showing a person entering a building is richer than saying it contains a person and a building. The system is designed to test for such a contextual understanding of photos. It works by generating a string of yes-or-no questions about an image, which are posed sequentially to the system being tested. Each question is progressively more in-depth and is based on the responses to the questions that have come before. The first version of the test was generated from a set of photos depicting urban street scenes, but the concept could conceivably be expanded to all kinds of photos, the researchers say.
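The researchers' exact question-generation procedure is not reproduced here; as a rough sketch of the sequential flow described above (the types, function names, and sample questions are assumptions), the loop below poses one yes/no question at a time and lets each new question depend on the answers already collected.

```python
from typing import Callable, List, Optional, Tuple

# A question generator: given the history of (question, answer) pairs so
# far, it returns the next, more specific yes/no question about the image,
# or None once the line of questioning is exhausted.
QuestionGenerator = Callable[[List[Tuple[str, bool]]], Optional[str]]

def run_visual_turing_test(image_id: str,
                           next_question: QuestionGenerator,
                           vision_system: Callable[[str, str], bool]
                           ) -> List[Tuple[str, bool]]:
    """Pose yes/no questions one at a time; each new question is chosen
    in light of the answers already given, probing progressively deeper."""
    history: List[Tuple[str, bool]] = []
    while True:
        question = next_question(history)
        if question is None:
            break
        answer = vision_system(image_id, question)  # True means "yes"
        history.append((question, answer))
    return history

# Intended flavour of the dialogue (illustrative only):
#   "Is there a person in this region?"          -> yes
#   "Is there a second person near the first?"   -> yes
#   "Are the two people walking together?"       -> ...
```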

More information: